Even though we have been looking forward to completely autonomous vehicles in Baltimore for some time now, the developments in self-driving technology over the past few years have pulled the horizon much closer. Whereas autonomous vehicles used to be a technology of the vague future, it is becoming clear that we may see self-driving cars on the road within the next decade, or at least at some point in this generation.
However, not every aspect of autonomous vehicles is going to be a boon for society. While there are environmental benefits to having computers control how vehicles move, and the daily commute could become far less stressful, autonomous vehicles are also going to shake up society in ways that are less beneficial.
One complication will be how the law adapts to the pressing issue of compensating innocent victims of autonomous vehicle accidents in Baltimore.
How Autonomous Vehicles are Supposed to Work
The end goal of the technology going into self-driving cars is to make them completely autonomous, allowing them to take control of all of the logistics of getting from Point A to Point B. Autonomous vehicle technology aims to allow drivers to get into a self-driving car, tell it where to go, and do nothing else until they have reached their destination.
Reaching this point, though, is not easy. Lots of different technologies go into how autonomous vehicles work, and each component of an autonomous vehicle's performance comes with some incredibly nuanced challenges. All of the different facets and challenges in autonomous vehicles, though, can be distilled into four categories:
- Determining exactly where the car is and where it is going
- Creating an awareness of the vehicle's surroundings
- Deciding how to react to those surroundings
- Driving the car in accordance with those decisions
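To make the four categories above concrete, here is a heavily simplified, purely illustrative sketch of a sense-decide-act loop in Python. Every class name, function name, and threshold in it is invented for illustration; real autonomous vehicle software is vastly more complex:

```python
from dataclasses import dataclass

@dataclass
class Obstacle:
    distance_m: float   # how far ahead the detected object is, in meters
    closing: bool       # whether the gap between car and object is shrinking

def perceive(sensor_readings):
    """Awareness step: keep only readings close enough to matter,
    discarding irrelevant information (hypothetical 50 m cutoff)."""
    return [o for o in sensor_readings if o.distance_m < 50]

def decide(obstacles):
    """Decision step: pick a driving action from the filtered picture
    of the surroundings (hypothetical 10 m braking threshold)."""
    if any(o.closing and o.distance_m < 10 for o in obstacles):
        return "brake"
    return "maintain_speed"

def act(action):
    """Control step: stand-in for the mechanical layer that would
    actually work the brakes, throttle, and steering."""
    return f"actuators: {action}"

# A distant parked car and a nearby closing obstacle
readings = [Obstacle(120.0, False), Obstacle(8.0, True)]
print(act(decide(perceive(readings))))  # the nearby closing obstacle triggers braking
```

Even in this toy form, the sketch shows why the middle two steps are the hard ones: filtering out the distant obstacle and reacting correctly to the near one is where the judgment lives, while the first and last steps are largely mechanical.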
The first and the fourth elements of autonomous vehicles are relatively straightforward. GPS technology has gotten good enough to measure a car's position to within a few feet, and anyone who has used Google Maps or Waze knows that setting a driving route from one place to another is easy. Meanwhile, programming a computer to perform the mechanical tasks of operating a vehicle, like applying the brakes or turning the steering wheel, has also largely been accomplished. Lots of cars are now equipped with semi-autonomous features that let a computer automatically park the car, maintain a speed with cruise control, or keep the car in its lane through an “autopilot” mechanism.
It is the second and third elements that have been the most challenging for the development of autonomous vehicles. Outfitting a vehicle with sensors that detect everything that could impact or imperil a car ride has not been easy. Relaying all of that information to a computer that ignores irrelevant information and focuses only on what should impact the drive has been trickier still. Determining how best to react to that information is where autonomous vehicles are most likely to fail.
The Legal Headaches from Autonomous Vehicles in Baltimore
While autonomous vehicles are far from the norm in Baltimore, the legal issues that they are bound to create once they become popular are already becoming apparent. Just the semi-autonomous features on some recent car models have created interesting legal questions that Maryland's personal injury law is not prepared to solve. For example: if a driver today backs into a parallel parking spot using the semi-autonomous parking feature on their car, and ends up backing into someone and hurting them, is the driver at fault?
Questions about who was ultimately responsible for a crash suddenly become much fuzzier when the vehicle can drive itself.
A Much Trickier Discovery Process
Determining who was at fault, of course, begins with figuring out what went wrong. When humans are driving the cars that collide, this is already difficult enough – complex accidents can be reconstructed using tire marks, photos, eyewitness accounts, and vehicle damage, but there is always some uncertainty to the process. With autonomous vehicles that use reams of data to make even the smallest driving decision, tracing what went wrong is going to involve examining computer code and comparing what happened to what the vehicle's programming should have done.
With so many programs and features involved in every aspect of what an autonomous vehicle does, pinpointing exactly where the mistake happened can be incredibly difficult. However, it is also an essential part of a personal injury claim stemming from an autonomous vehicle crash, as it would determine who was ultimately at fault for the accident and who should be held liable for the costs associated with it, including your injuries and losses.
Numerous Possible Defendants
Finding exactly what went wrong is essential because there are so many people and companies behind autonomous vehicles. In theory, at least, each company could claim that another one was responsible for the crash until it is clearly shown that the malfunction that caused the accident was its own responsibility. This complicates matters for the plaintiff and victim of an autonomous vehicle crash because there can be a dozen or more companies involved in the operation of each self-driving car, like:
- The company that manufactured the vehicle, like Tesla, Toyota, or Honda
- The company that designed the sensors that detect the car's surroundings
- The company or person responsible for correctly installing and maintaining those sensors
- The company, like Google's Waymo or Uber's self-driving car division, responsible for designing and honing the computer programs that distill the information gathered and then use that information to make driving decisions
The number of possible culprits for a crash involving an autonomous vehicle can make a personal injury claim for compensation more difficult.
Ethical Decisions Programmed into Autonomous Vehicles
The computer programs that are the fundamental components in autonomous vehicles have lots of decisions to make. Among them is whether to prioritize the safety of the passenger in the vehicle over the safety of others outside of it. While this sounds theoretical, the real-world applications are inevitable once autonomous vehicles hit the roads of Baltimore: should autonomous vehicles be programmed to protect their own passengers, even if it means putting others at risk?
Imagine a situation where high winds knock over a tree. The tree is falling across a road, on which are two autonomous vehicles. The sensors on both vehicles detect the tree and determine the risks associated with it. Car A determines that there is a 95% chance that its passenger dies if evasive action is not taken. However, the only move that Car A can make would result in a crash with Car B. That crash comes with an 80% chance that both passengers are seriously hurt.
No matter what happens, there will be an innocent victim: if Car A avoids the crash, its passenger will likely die from the falling tree. But if Car A saves its passenger, then Car B's passenger will likely get hurt. How the autonomous vehicle is programmed will determine who suffers.
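One way to see why the programming choice matters is to run the arithmetic of the scenario above. The probabilities come from the example, but the "harm weights" below are invented assumptions, since how to weigh a likely death against two likely injuries is exactly the ethical question at issue:

```python
# Hypothetical expected-harm comparison for the falling-tree scenario.
# The probabilities come from the example; the weights are assumptions.
P_DEATH_IF_NO_ACTION = 0.95   # Car A's passenger likely dies if it holds course
P_INJURY_IF_SWERVE = 0.80     # both passengers seriously hurt if A swerves into B

DEATH_WEIGHT = 10.0   # arbitrary: count a death as ten times a serious injury
INJURY_WEIGHT = 1.0

harm_hold_course = P_DEATH_IF_NO_ACTION * DEATH_WEIGHT      # 0.95 * 10 = 9.5
harm_swerve = P_INJURY_IF_SWERVE * INJURY_WEIGHT * 2        # 0.80 * 1 * 2 = 1.6

choice = "swerve" if harm_swerve < harm_hold_course else "hold course"
print(choice)  # "swerve" under these assumed weights
```

Change the weights and the answer changes with them: if an injury were counted as heavily as a death, swerving would no longer be the clear choice. Whoever sets those numbers is, in effect, deciding in advance who suffers.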
Autonomous Vehicles that Have Been Hacked
A significant legal issue with autonomous vehicles is how to handle crashes that result from a vehicle getting hacked. While the hacker is ultimately the party responsible, autonomous vehicles should also be expected to be insulated from hacking attempts to keep their passengers safe.
Complicating matters is the fact that hacking an autonomous vehicle would likely be considered a crime – if it is not against the law yet, it will be once autonomous vehicles hit the mainstream. Few insurance companies will cover the costs of an intentional or criminal act by one of their insured customers. Therefore, even if law enforcement were somehow able to track down whoever hacked the autonomous vehicle and made it crash, and bring them to justice, odds are that the hacker's insurance would refuse to pay for the costs of the accident they caused. This would leave innocent victims looking to the hacker to pay for their losses out of the hacker's own pockets – something that is extremely unlikely to happen.
However, it also seems reasonable to expect autonomous vehicle makers to take steps to protect their vehicles from hacking. While no cybersecurity defense is going to be perfect, defenses can be built to stop all but the most skilled or novel hacking methods. In theory, then, hacking attempts that are simple and easy to defend against should be thwarted by an autonomous vehicle's cybersecurity measures. If they are not, victims who should have been able to depend on those anti-hacking defenses should be able to recover compensation from the autonomous vehicle makers or their cybersecurity teams. However, the law has not had to deal with this problem yet.
A Potential Solution: The Legal Concept of Enterprise Liability
Most of these legal problems that come with autonomous vehicles have to do with the uphill struggle that victims and plaintiffs face when they get hurt and seek compensation. They are faced with incredibly nuanced technologies that they cannot understand and that are likely protected by patents or guarded as trade secrets, but they still have to show, by a preponderance of the evidence, that a particular party should be held liable for the crash and pay for their injuries.
Unless laws are passed to clarify the issue, the legal concept of enterprise liability is a possible solution. Enterprise liability allows a plaintiff and victim to recover compensation without identifying precisely which defendant caused their injuries, so long as the plaintiff has shown that the wrongdoer was one of a small handful of defendants in the case. Under enterprise liability, all of the defendants that could have been responsible for the autonomous vehicle crash are held jointly and severally liable, allowing the plaintiff to recover the compensation they need from any one of them. Once the victim has been compensated, the defendants can then sue each other to establish who was liable for the incident.
Compensation for Victims of an Autonomous Vehicle Accident
Victims of an autonomous vehicle accident in Baltimore deserve to be compensated for their losses. This is, after all, the main point of personal injury law – to give innocent victims the financial means to overcome the costs of their recovery and to make them as whole as possible after the setback of the accident.
This compensation comes in the following forms in Baltimore:
- Medical expenses. Victims deserve to recoup the costs of the medical care they have received, and are likely to need in the future, for the injuries they sustained in the accident.
- Lost wages and professional setbacks. Victims also stand to lose out on the income that they would have received, were it not for the accident. This can come in the form of lost wages, lost business opportunities, and even the reduced ability to earn a living due to the injuries they have suffered.
- Pain and suffering. While it is tricky to put a dollar amount on the physical pain that victims have felt from their injuries or the mental suffering and anguish that has come from the accident, that does not mean that victims should be forced to deal with their troubles uncompensated.
Gilman & Bedigian: Autonomous Vehicle Accident Lawyers in Baltimore
The personal injury lawyers at Gilman & Bedigian in Baltimore foresee the likelihood that autonomous vehicles will transform how people get from one place to another in the very near future. They also see how things can go wrong and how people can get hurt, and understand how personal injury law will struggle to answer some of the most pressing questions that can come up after an autonomous vehicle crash.
With their legal representation, victims of autonomous vehicle accidents in Baltimore can invoke their rights and fight for the compensation that they need and deserve. Contact them online to schedule a consultation to understand your rights and how your case will likely move forward in this developing and unsettled area of the law.