
Autonomous Driving and Countering Human Nature

Autonomous driving is on the rise everywhere. What could it mean for the future, and what role will it play in human nature?

The history of self-driving vehicles

The concept of the self-driving car does not date from the 21st century. Believe it or not, it was first introduced in 1939, in an exhibit by General Motors. It was subsequently improved upon by Japanese engineers in 1977, who used a computer to process movement, obstructions, and other elements the car could detect ahead of and around the vehicle. Because those computers could not process outside information quickly enough, this early technology limited autonomous cars to speeds of approximately 20 miles per hour.

However, until the mid-2010s, the general public dismissed the possibility of autonomous driving hitting the market. It was considered a technology akin to flying cars: an amusing thought, but too scientifically challenging for a near-future project. Yet alongside the artificial intelligence boom of the past decade, automakers began funding semi-automated and fully automated driving projects, planning to be market-ready in the 2020s. Examples include Apple's Project Titan, Google's self-driving car project, which was spun off as Waymo, and General Motors' subsidiary Cruise.

The danger of existing automated driving technology

However, long before these projects officially began, questions about the practical reliability of self-driving cars were already being raised. In the last few years, many car companies have integrated automated "protective" technology such as Automated Emergency Braking (AEB), used and advertised by Audi, General Motors, Ford, BMW, and many more. While this feature may be practical and quasi-necessary, there are rare but still theoretically possible scenarios in which automated braking could be a pathway to disaster.

Although it is designed as a preventive measure against a vehicle or person appearing in front of the car, a sudden automated brake on a high-speed roadway, triggered by something like an animal crossing, could cause extensive damage to cars behind that have insufficient time to react or anticipate the stop. One crash may be prevented, but a last-minute brake that causes a slight collision with an unforeseen obstruction is far less damaging than a full-on pileup involving not one but several vehicles behind.
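
To make the trade-off concrete, here is a minimal Python sketch of the kind of comparison such a system could make. Everything in it (the Situation fields, the thresholds, the strategy names) is a hypothetical illustration of the scenario above, not any manufacturer's actual AEB logic.

from dataclasses import dataclass

@dataclass
class Situation:
    speed_mps: float            # ego vehicle speed in metres per second
    obstacle_distance_m: float  # distance to the detected obstruction
    follower_gap_m: float       # gap to the closest vehicle behind
    followers: int              # number of vehicles trailing closely

def seconds_until(distance_m: float, speed_mps: float) -> float:
    """Rough time before a gap closes if speeds stay constant."""
    return distance_m / speed_mps if speed_mps > 0 else float("inf")

def choose_braking(s: Situation) -> str:
    """Pick a braking strategy from rough time-to-collision estimates.

    The thresholds (1.0 s, 1.5 s) are illustrative assumptions.
    """
    ttc_ahead = seconds_until(s.obstacle_distance_m, s.speed_mps)
    # Follower's headway, assuming it travels at roughly our speed.
    headway_behind = seconds_until(s.follower_gap_m, s.speed_mps)

    if ttc_ahead >= 1.0:
        return "no intervention"
    # A full stop trades one possible crash ahead for a likely pileup
    # behind when the follower has no time to react (~1.5 s assumed).
    if headway_behind > 1.5 or s.followers == 0:
        return "full emergency brake"
    return "moderate brake + hazard lights"  # accept a lighter impact ahead

# Example: ~110 km/h highway, animal 25 m ahead, tailgaters 10 m behind.
print(choose_braking(Situation(speed_mps=30.0, obstacle_distance_m=25.0,
                               follower_gap_m=10.0, followers=3)))

In this example the follower's headway (about 0.3 seconds) is far below the assumed reaction time, so the sketch prefers a moderated brake over a full stop, mirroring the pileup concern described above.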

To whom is the blame assigned?

The latter scenario also raises a much more serious topic surrounding algorithm-induced accidents. The essential question here is: who is to blame? Self-driving cars, and even semi-automated systems today, are engineered to protect the vehicle and its passengers.

An article published by The Atlantic offers an optimistic perspective, estimating a 90% decrease in automobile accidents; but the other 10% still remain, and that remainder matters. If drivers are no longer responsible for autonomous driving tragedies, who is? There will inevitably be a scenario in which a family or loved one sues or seeks reparations over the death or injury of a victim in a supposedly algorithm-determined event. In that case, how will trial arguments be shaped? The character and general physical capability of the plaintiff or the victim will most likely be irrelevant, as will those of the other vehicle or person involved in the accident. This is where the company that built the automated driving technology comes into play. Many assume that the designers behind the "driver first" aspect of autonomous driving should be held accountable.

One could argue that these companies should be freed of all criminal liability because of the general reduction in automobile fatalities, and because self-driving is offered as an optional feature that does not entirely remove the driver from awareness-related responsibilities on the road. Then again, one could argue that while these companies reduce car fatalities in general, it is impossible to know whether a given accident would have taken place had traditional driving been kept in place.

Countering human decision-making

Consequently, the issue of mathematically determined decision-making replacing ordinary human choice and accountability comes to the table. Some will argue, on statistical grounds, for the inclusion of decision-making AI, citing the utilitarian benefit of reducing automobile accidents. Others will stress the importance of keeping such power in consumers' hands, so that accountability remains decentralized rather than concentrated in the developers and executives of the large companies responsible for data- and algorithm-driven decisions.
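
As a back-of-the-envelope sketch of that utilitarian arithmetic, here is a toy Python comparison. The crash rates and severity weight are entirely hypothetical placeholders, chosen only to echo the roughly 90% reduction cited above.

def expected_harm(crashes_per_million_miles: float, avg_severity: float) -> float:
    """Expected harm per million miles: crash frequency times severity."""
    return crashes_per_million_miles * avg_severity

# Assumed placeholder figures: automation removes most crashes, but the
# remainder is now attributable to a single algorithm.
human_driving = expected_harm(10.0, avg_severity=1.0)
automated_driving = expected_harm(1.0, avg_severity=1.0)   # ~90% fewer

print(f"human: {human_driving:.1f} vs automated: {automated_driving:.1f}")

The utilitarian case rests on the automated figure being lower; the accountability case notes that every remaining crash now traces back to a single, centralized decision-maker rather than to many individual drivers.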

Through the technological boom of the last decade, it has become clear that society is progressing toward one centered on the use of artificial intelligence in daily life. While the rise of AI has indisputably produced, and will continue to produce, economic, scientific, and social benefits, many experts acknowledge an ethical gray area in which the future presence of AI must be thoroughly questioned and appropriately regulated to head off the unexpected issues to come.

Credits

Photo by Bram Van Oost
