The concept of medical responsibility is historically entrenched, with first mentions dating to the fabled Code of Hammurabi, which famously established the “eye for an eye” maxim. The code arguably offers the founding statement of medical malpractice law, reading: “If the doctor has treated a gentleman with a lancet of bronze and has caused the gentleman to die, or has opened an abscess of the eye for a gentleman with a bronze lancet, and has caused the loss of the gentleman’s eye, one shall cut off his hands.” Millennia later, “lancet” would become synonymous in highbrow intellectual communities with the concept of medical responsibility, and with medical malpractice itself. A famed British medical journal, The Lancet, borrows its name from the ancient code’s provision. Britain would unwittingly spearhead efforts to legislate medical malpractice, establishing nomenclature and court decisions that would become the ancestors of modern malpractice law.
During the formative centuries of English common law after the pivotal Battle of Hastings in 1066, medical malpractice law began taking shape, and the common law courts record several medical malpractice decisions. An 1164 case, Everad v. Hopkins, saw a servant and his master collect damages against a physician for practicing “unwholesome medicine.” The 1374 case Stratton v. Swanlond is frequently cited as the “fourteenth-century ancestor” of medical malpractice law. Chief Justice John Cavendish presided over the case, in which Agnes of Stratton and her husband sued surgeon John Swanlond for breach of contract after he failed to treat and cure her severely mangled hand. Stratton ultimately saw her case dismissed due to an error in the Writ of Complaint; even so, the case served as a cornerstone in setting standards of medical care.
Cavendish ruled that a physician could be held liable if and when they harmed a patient as a result of negligence, while stipulating that a physician who diligently adhered to the standard of care would not be liable even if he accomplished no cure. A legal precursor to expert testimony came in 1532, when a law passed under the reign of Holy Roman Emperor Charles V required the opinion of medical men to be taken in cases of violent death. In 1768, Sir William Blackstone penned Commentaries on the Laws of England, in which he employed the Latin term “mala praxis” to describe the concept of professional negligence, a form of what modern parlance calls a tort. Blackstone noted that mala praxis “breaks the trust which the party had placed in his physician, and tends to the patient’s destruction.” The term “malpractice” itself was coined sometime thereafter, deriving from Blackstone’s work.
The Fundamental Elements of Medical Malpractice
The fundamental elements of litigated medical malpractice are, above all, duty and negligence. Historic efforts to define these two elements were muddled. English law under Henry V held that the physician owed a duty of care to the patient because medicine was a “common calling” (a profession), and required physicians to exercise care and prudence. Those in other professions who did not practice a “common calling” were liable only if an express promise had been made to achieve or avoid a certain result; in the absence of such a promise, the professional could not be held liable. Physicians, then, were held to a separate standard because of the nature of their profession. Modern notions of negligence parallel what history called the “carelessness” of early physicians. The notion of duty was legally elucidated in British common law: carelessness and neglect were not in and of themselves causes of action unless the practitioner, by the nature of their profession, owed a duty to the person to whom they rendered care. The law determined that medical professionals were legally bound by a duty of care to their patients, and negligence was thereby grounds for legal action. The establishment of duty and negligence laid the foundation for Anglo-American medical malpractice legislation.
The 18th and 19th centuries saw an ebb and flow between patients’ and physicians’ respective rights in the area of medical liability, alternating which side held the upper hand. One of the first orders of business was defining the emerging concept of a ‘standard’ or ‘duty of care.’ Both the standard of care and the logical foundation of ‘expert testimony’ derive from the notion that there is a professional custom: the standard of care a physician owes the patient is defined not by common rationale or general legal sensibility, but by what other physicians deem “customary” for their profession. Other medical professionals must therefore agree that a defendant “contravened customary practice” for a legal transgression to be established. This allowed medical professionals to set the legal standard for their own behavior; they were bound to a standard of care because they practiced a ‘common calling’ and possessed a supposed shared knowledge of best practices. In early British common law, this principle was contained in the ‘rule of locality,’ which held that physicians were bound to their self-set standard, but could be judged only by professionals within their geographic region, or “locality.” The doctrine has since evolved: modern law gives no weight to geographic locality but requires that all medical professionals in the same practice area be bound to the same standard, and only a physician in the same practice area may judge that another professional has breached that standard. A 1769 lawsuit in England, Slater v. Baker, set about defining the standard by which a physician’s conduct could be measured and compared, while still enforcing the arbitrary requirement that a physician could be found liable only if a fellow physician from the defendant’s own geographic region found that the standard of care was breached. The geographic locality rule was eventually scrapped in Anglo-American law, but the locality of practice area remained intact.
A 19th-century Connecticut medical malpractice case established that “want of ordinary diligence, care, and skill” was grounds for a verdict in favor of the plaintiff. The language employed in that decision became the de facto standard other states followed in handling medical malpractice suits.
The First Medical Malpractice Cases
The first medical malpractice cases in the United States centered on breach of contract rather than failure to adhere to a standard of care: the defendant physician had made some sort of express promise to skillfully render care and obtain a good result, and failure to do so was grounds for a suit. Five years after George Washington’s inauguration, the country saw its first recorded medical malpractice lawsuit. A man sued the surgeon who operated on his wife and caused her death, despite having promised the couple that he would operate skillfully and safely. The breach-of-contract case resulted in a plaintiff verdict and an award of 40 pounds.
A fair articulation of why medical professionals self-define the standard of care appeared in the Wisconsin Law Review, which stated: “…medical practice, being highly complex, is not susceptible to evaluation through ordinary common sense and must instead be assessed pursuant to the customs of those with experience.”
Asking a lay juror to determine negligence in a field as nuanced and complex as medicine proved problematic. The issue was alleviated by formalizing the requirement of expert witnesses to assist the lay juror. On the issue, the Wisconsin Law Review wrote, “The complexity of any technical field, medicine included, may well disable a lay juror who seeks independently to assess the relative risks and benefits attending a given course of conduct. That, however, only means that the juror needs advice from experts (genuine experts) who can identify the risks and benefits at issue. Thus informed, there is no reason that a juror cannot and should not pass on the appropriateness of anyone’s conduct, including a physician’s.”
Prominent physicians Nathan Smith and R.E. Griffith, of Yale and the University of Pennsylvania respectively, believed that medical malpractice lawsuits were beneficial and necessary, serving as a tool of accountability in a poorly regulated profession. The American Medical Association (AMA) was founded in 1847 with the goal of standardizing the profession and elevating the standing of physicians in society. At the time, the vast majority of suits stemmed from orthopedic malpractice and deformities resulting from botched amputations. As physicians sought to raise their own standards, patient expectations rose in turn, and with the arrival of liability insurance for physicians, medical malpractice suits shot up in the United States in the late 19th century.
Prior to his presidency, Abraham Lincoln was a distinguished attorney who took on medical malpractice cases for physicians and patients alike. Lincoln represented two defendant physicians who had treated a man injured when a chimney fell on him. The physicians applied splints to the patient’s legs, assuming he would not survive his injuries; the patient survived but was left with a crooked right leg when the splints were removed. The man recruited six attorneys, 15 physician witnesses, and 21 other witnesses in his suit against the two physicians, while Lincoln presented the town’s only other 12 physicians. In a strategy that harks forward to the modern statute of limitations and the importance of fresh, compelling evidence, Lincoln believed the best defense was the passage of time, and so he obtained many postponements. The trial resulted in a hung jury.
Religion and Medical Malpractice
A steady uptick in medical malpractice cases can be attributed, in part at least, to the decline of religious fatalism. It was a pervasive belief that misfortune and injury were acts of God, meant to be construed as punishment for moral and religious transgressions. Overturning this belief may be considered a far-off ripple effect of The Enlightenment, a historical ‘moment’ at which prominent European thinkers began to reject the notion that everything was determined by the will of an omnipotent God. As philosophers and scientists alike began to promulgate the idea that willful human action was the true determinant of fortune and misfortune, a fringe effect was the rise of medical malpractice litigation, a century or so later. As people began to accept that injury and misfortune could be attributed to human error and not God’s will, they began to assert an entitlement to recompense if they suffered as a result of human error. This was a brick in the foundation of medical malpractice litigation.
A 1950s court decision in England produced what is commonly referred to as the Bolam test, which laid the groundwork for an informal three-pronged test employed in the UK and the US alike. The Lancet wrote, “Since Bolam, modern medical negligence law can be whittled down to three fundamental factors: one, confirming the patient was ‘owed a legal duty of care’ by the health practitioner who is the ‘defendant’ in cases of medical negligence; two, establishing that the defendant was in ‘breach’ of that duty of care in failing to reach the standard of care required by law; three, proving that this breach of duty caused or contributed to the damage or injury to the patient.” These are the elements a patient must prove in order to win a malpractice case today; a breach of the standard alone is “meaningless” with regard to liability unless it proximately results in injury to the patient.
Duty of care was established not with patients’ rights in mind per se; rather, it was founded in what historian Harvey Teff described as “the mystique of medicine and the strength of its professionalization.” The common layperson cannot and will not comprehend the intricacies of medicine, so no objective standard may be set by non-medical professionals.
The 1960s were a critical moment in the history of medical malpractice litigation in the US, as the frequency of suits saw an enormous uptick. Contributing factors included new, complex treatments that allowed more room for error or injury; what the AMA described as a “changing legal landscape that removed barriers to lawsuits and changed liability rules”; and changes in satisfaction with health care. This caused medical professionals to lobby for federal intervention in the realm of medical tort litigation. Legislators attempted to take an evenhanded approach that would excessively favor neither plaintiff nor defendant, but because every state is afforded the right to legislate medical malpractice independently, without federal oversight, the approach differs from state to state. Two competing schools of thought weigh on the manner of medical malpractice legislation. The American Medical Association writes, “Physicians and physician organizations have tended to view most medical malpractice claims as spurious and injurious to the medical system, whereas patient advocates view the malpractice system as both a deterrent against the practice of dangerous medicine and an avenue for much-deserved compensation for injured patients.”
The stakes grew higher as damage awards grew exponentially, keeping pace with inflation. Birth injury malpractice cases became widespread as the link between blatant physician error and cerebral palsy became clear; five of the ten highest-paid claims of all time were cerebral palsy suits, for which the plaintiffs won multimillion-dollar awards. Plaintiffs became entitled to both economic and noneconomic losses. Economic losses are the quantifiable monetary losses associated with the injury caused by the defendant’s negligence. Noneconomic damages are the unquantifiable losses for pain, suffering, and loss of enjoyment of life, among other emotional hardships.

As juries began to award substantial damages to injured plaintiffs, liability insurance premiums for physicians increased. Physicians and other medical professionals passed these costs along to patients, resulting in higher costs for health care, so access to health care was directly affected by medical malpractice litigation.

Throughout the latter half of the 20th century, many states introduced medical malpractice reform acts. Wrestling with the question of whether to favor plaintiff or defendant, states began to impose what are known as “damage caps,” which vary widely from state to state. Damage caps limit the amount of money a plaintiff can collect should they win their malpractice case. Some states impose no limit at all because such limitations are constitutionally prohibited; others have weighed what figure neither deprives the plaintiff of rightful compensation nor is unjustly punitive to the defendant. The lowest caps sit in the neighborhood of $250,000, while the highest are in the neighborhood of $2.5 million. A handful of states have adopted a medical malpractice fund, to which all physicians in that state must contribute; the fund pays damages in medical malpractice claims after the physician’s insurance covers the first $1 million, so physicians need only carry insurance covering up to $1 million. This is meant to bring down insurance premiums for medical professionals. To a minor extent, damage caps influence the state in which a medical professional chooses to practice, though they are rarely decisive. Some states also allow punitive damages, which must be paid by the defendant as punishment but which are not awarded to the plaintiff.
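To make the interaction of a cap and a state fund concrete, the short sketch below works through the arithmetic on hypothetical figures. The function name, the assumption that the cap applies only to noneconomic damages, and the specific thresholds (a $250,000 cap and a $1 million fund trigger, taken from the ranges mentioned above) are illustrative assumptions, not any particular state’s rules.

```python
# Illustrative sketch only: the cap, the fund threshold, and the assumption that
# the cap applies solely to noneconomic damages are hypothetical choices drawn
# from the ranges discussed above, not any state's actual statute.

def split_award(economic, noneconomic,
                noneconomic_cap=250_000, fund_threshold=1_000_000):
    """Apply a hypothetical noneconomic damage cap, then split the payout
    between the physician's insurer and a state malpractice fund."""
    capped_noneconomic = min(noneconomic, noneconomic_cap)  # damage cap
    total_payout = economic + capped_noneconomic
    insurer_pays = min(total_payout, fund_threshold)        # insurer covers the first tier
    fund_pays = total_payout - insurer_pays                 # fund covers the excess
    return total_payout, insurer_pays, fund_pays

# Hypothetical verdict: $2.2M economic losses, $900K claimed noneconomic losses.
total, insurer, fund = split_award(2_200_000, 900_000)
print(f"payout ${total:,}; insurer ${insurer:,}; fund ${fund:,}")
# -> payout $2,450,000; insurer $1,000,000; fund $1,450,000
```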
As fear over “spurious claims” grew and the lucrative nature of malpractice payouts became clear, legislation began to account for the concept of shared fault in medical malpractice claims. Many states concluded that a medical professional is not always exclusively responsible for the injury incurred. The doctrines of contributory and comparative fault allow the jury to assess the claim and apportion fault between the plaintiff and the defendant, promoting responsibility on both sides.
The 1960s and 1970s also saw the emergence of the doctrine of informed consent. Modern medicine requires that medical professionals disclose all of the risks that accompany a given procedure; if a treatment or procedure entails serious or deterrent risk, the patient may make an informed personal decision to refuse it, as is their right. During these two decades, it became a fundamental tenet of biomedical ethics that a patient be informed of all the risks of a procedure, and failure to warn patients of possible adverse outcomes became an additional source of liability for physicians and medical professionals. Legislatures eventually took up the task of explicitly defining what information must be disclosed and what constitutes a “lack” of informed consent. The definitions had to navigate the issues of emergency care, patient-provider relationships, “common” knowledge, consent on behalf of a minor, and whether a given risk would deter a “reasonable” person from accepting treatment, as lawmakers set about drafting informed consent laws that covered the ifs, ands, and buts of most conceivable situations requiring informed medical consent. In the same era, courts discarded the doctrine of charitable immunity, which had previously shielded charitable institutions from suit.
Medical Malpractice and Liability Insurance
Liability insurance eventually took its seat as a crucial player in medical malpractice suits. The Massachusetts Medical Insurance Society, founded in 1908, was among the first to provide insurance against “unjust suits for alleged malpractice,” making mention of such coverage in 1919. On one hand, the nascent brand of insurance offered physicians peace of mind: settlements and damages would be covered. On the other hand, it assured plaintiffs that every meritorious claim was worth bringing forward, as that claim would almost certainly see payment.
The medical industry uniquely benefits from broad autonomy and self-regulation. Standardization of care and general oversight work to balance physician autonomy, and some may say they even erode that autonomy to an extent. Health Maintenance Organizations (HMOs) enforce patterns of practice to which providers must adhere. Emerging technologies throughout the 20th century paved the way for new treatment methods, but they also “raised patient expectations [while] multiplying the possibilities for mishaps.” In an examination of the interplay of autonomy and oversight, the Drexel Law Review wrote “Standardization and oversight serve to further reinforce patient expectations. By way of contrast, a disorganized profession typified by idiosyncratic practices discourages perceptions of consistent quality. Formal organization of the medical profession was intended, in part, to counter this characterization.”
As the field of medicine has advanced in capability and courage, so has the scope of possible mishaps, and the history of medical malpractice includes some veritably unbelievable cases. Cerebral palsy resulting from mistakes in the birthing process has appeared a number of times, and almost invariably results in enormous payouts. One mother was awarded $74.5 million after her child was born with cerebral palsy and her physicians falsified records to cover up the wrongdoing.
Today, medical malpractice claims rarely go to trial, as the disputing parties usually reach a settlement by means of formal or informal arbitration. After the injured party files a claim and retains legal representation, the critical process of discovery begins.