
Software Safety in Autonomous Cars: Emerging

Research and Technology


Andrew Berry
University of Texas at Dallas
alb151030@utdallas.edu

Cindy Frias
University of Puerto Rico at Rio Piedras
cindy.frias@upr.edu

Sergio Rosales
University of Texas at El Paso
srosales4@miners.utep.edu

Abstract
Safety in cars is a primary concern of both consumers and manufacturers in the automotive
industry, since driving has always been a safety-critical operation. Human error accounts for
the vast majority of automobile crashes. Given the increasing capability of
autonomous technology, companies in the automobile industry are growing interested in developing
self-driving, or robotic, cars. Although many factors contribute to safety in autonomous cars, in
this paper we primarily focus on three aspects: (1) real-time environment onboard representation, (2)
automotive user interfaces, and (3) decision-making models. Specifically, we include a safety
assessment of the environmental interaction of the autonomous vehicle and discuss emerging
technologies for road mapping. A survey of advances in AutomotiveUI is also conducted to show
how current research is addressing the safety issues exposed by incorrectly designed interactions
between operators and machines. Moreover, challenges, considerations, and possible solutions for
coordinated and navigational decision-making models are discussed as well. The findings
of our work help reveal areas of opportunity where system safety can advance autonomous car
research.
Introduction
Cars have been everywhere and important in our lives ever since Nicolas-Joseph Cugnot built the first
automobile in 1768. The car is the main method of transportation in the world. In some places
it is essential to have a vehicle to get to work, school, or other destinations. For some people
a car represents freedom of movement, and for others it is their livelihood.
However, car accidents remain a significant and serious problem, often caused by bad
drivers or by people driving intoxicated after a gathering with friends. Worldwide, the number of people
killed in road traffic each year is estimated at almost 1.3 million, and the number of injured is
estimated at 50 million [4, 8]. The Association for Safe International Road Travel (ASIRT) believes that
government plays a crucial role in global road safety in terms of policy, enforcement, and
infrastructure. The road safety situation in the United States is dire, with nearly 40,000
people killed in road traffic each year, and American travelers remain especially vulnerable to global road
safety conditions [4].
In 2009, Stanford University created a driverless car that could go from one place to another.
After this, many companies such as Google, Tesla, and Volvo started developing semi-autonomous and
autonomous cars. These companies saw that autonomous vehicles could provide
safe transportation and freedom of movement for everyone. People spend a lot of time just going
from one place to another sitting in their cars, time that they could use to be more productive or
to spend with their families and friends. Moreover, people with
disabilities lose even more time trying to get to a given place. Furthermore, fully autonomous
cars would be helpful for transporting home passengers intoxicated with alcohol or
other substances. That is why it is important to study software safety in autonomous cars and the
emerging technology.

Safety in Current Autonomous Models


The claim that autonomous vehicles will lead to safer roadways has been supported by a small
body of emerging data. Statistics on autonomous driving are limited due to the lack of testing-ready,
self-driving vehicles. Further hampering research, lawmakers in many states are hesitant to allow
autonomous vehicles on public roadways due to safety concerns about the level of development in
these cars [3]. The importance of testing these cars on public roads cannot be overstated. Simulated
roadways lack the unpredictability of real-world scenarios. For example, in a 2015 TED talk, the head of
Google's driverless car program, Chris Urmson, humorously described how the Google Car once
encountered a woman in a wheelchair chasing a duck across the road [6]. Needless to say, such
scenarios are rarely anticipated in simulated environments.
The variety of autonomous cars on the road today is extremely limited. Perhaps the most
prominent example, the Google Car, is the most high-tech of those that operate on public roads.
According to a May 2015 safety report, the Google Car had logged 1,011,338 miles in autonomous
mode without a collision that was the fault of the Google Car [1]. In comparison, an average U.S.
driver will have an accident every 165,000 miles [30]. The Google Car had its first at-fault crash in
February 2016, when it collided with a bus due to a decision-making
failure while changing lanes [10] (see Section 3 on decision-making for potential solutions to these
problems). Despite this, the Google Car has proven safer than a human driver in limited scenarios.
Another high-profile automation project comes from Elon Musk's car company, Tesla.
Tesla models, already unique due to a total lack of dependence on gasoline, now come equipped with
an Autopilot feature, which allows the car to drive autonomously exclusively in highway
scenarios. In May 2016, the first fatal accident in which
Autopilot was involved occurred in a Model S [5]. This was after 130 million miles collectively logged
by Autopilot without a fatality. By comparison, the average distance per fatal crash among human
drivers in the US is about 94 million miles [5]. On the face of it, this data may support the safety of
robotic driving over human driving. However, critics find the comparison misleading because the
limited scenarios in which Autopilot is used already inherently pose less risk to safety [24].
While some of the early statistics on autonomous cars have been inconclusive, most industry
experts anticipate that, in the long run, mature autonomous technology will be a safer alternative to
human drivers. Key areas of disagreement remain over the timescale and level of testing needed
before self-driving cars gain widespread use. Current projects by Google and Tesla are merely a
microcosm of what is to come.
Terms and Concepts
Failure is "the nonperformance or inability of the system or component to perform its
intended function for a specified time under specified environmental conditions" [21]. A system can
fail by not satisfying its goal; this is called a systemic failure. A system can also fail by
not following the original design: if the system deviates from the originally designed behavior, this
is considered a failure.
Both faults and failures are behaviors that are neither intended nor desired,
and a system exhibits a failure or a fault because of errors in the system. An error is "a design flaw or
deviation from a desired or intended state" [21]. Following this line of thinking, errors, which are
static, lead to failures or faults, which are incorrect system behaviors.
When discussing how software contributes to system failures, it is important to note that
"software-related computer failures are always systemic" [21]. When it is said that software is at the
root of a failure, what is meant is that the causes of the failure are design flaws. Minimizing
the number of design flaws in a system increases system reliability, which
Leveson defines as "the probability that a piece of equipment or component will perform its intended
function satisfactorily for a prescribed time and under stipulated environmental conditions" [21].
While there is an effort to describe reliability as a function of probability, the discussion of
reliability is entangled with psychological issues of trust. The topic of reliability touches on many
human factors of "trust and beliefs, dispositions, and intentions" [11]. Avoiding these debates, we will
cover the issue of trust only lightly.
Trust, as defined by Brian Mekdeci, is "the firm belief in the reliability, truth, or ability of
someone or something." Although reliability can be used to measure trust, trusting is not relying.
The topic becomes even more complex in a social context because trust means that "other people
control outcomes that we value" [11]. In regard to this research, trust means that autonomous cars
have control over something valuable to their operators and passengers, namely their lives.

1. Environmental Interaction
The environment can be classified as static or dynamic. Urban environments are considered
dynamic because they are changing all the time: people walking, bicycles, motorcycles, and other vehicles
(large and small) are the kinds of things that make them dynamic. In dynamic
environments, the focus is on identifying objects big and small, and on tracking the movements and the dynamics
of traffic participants along the corresponding paths in the lanes. Rural environments can
be either dynamic or static; when static, the problem is mainly one of road mapping and localization. Both the dynamic
and the static cases share common challenges:
1. Changes of weather
2. Bodies of water
3. Identification of small objects
4. Identification of location
5. Prevention of crashes
6. Traffic

When the planned trajectories of both vehicles are known, the situation can be classified as
safe. But since we do not know the trajectories, this everyday situation has to be classified as unsafe.
With autonomous cars, each vehicle would know the planned trajectories of other vehicles and could
check deterministically whether a crash is possible, but this trajectory is unknown for a conventional vehicle.
According to [8]:
1. These trajectories can be clustered, resulting in motion primitives such as left turn, lane
change, or parking.
2. The lanes most likely followed by traffic participants are determined by high-level
behaviors.
3. The computations are based on the Markov chain abstraction.
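Given planned trajectories for both vehicles, the deterministic check mentioned above reduces to a pairwise distance test over synchronized time steps. A minimal sketch (the function name, trajectory format, and safety radius are illustrative assumptions, not taken from [8]):

```python
def trajectories_conflict(traj_a, traj_b, safety_radius=2.0):
    """Deterministic crash check: two synchronized (x, y) position
    trajectories conflict if the vehicles come closer than a safety
    radius at any common time step."""
    for (xa, ya), (xb, yb) in zip(traj_a, traj_b):
        if ((xa - xb) ** 2 + (ya - yb) ** 2) ** 0.5 < safety_radius:
            return True
    return False
```

With human drivers, `traj_b` is unknown, which is exactly why the probabilistic methods below are needed.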
1.1 Markov Chains and Monte Carlo Simulation
Markov chains allow the position distribution of other traffic participants to be predicted.
During the operation of the autonomous vehicle, the stochastic reachable sets of the Markov chains
can be computed efficiently. From [8] we know that the abstraction to Markov chains is only applied
to other traffic participants, while the behavior of the ego vehicle is known from the trajectory
planner. Monte Carlo simulation produces more accurate transition probabilities, while reachability
analysis produces a complete abstraction. As [8] states, "The term Monte Carlo simulation
refers to methods that are based on random sampling and numerical simulation," and the approach is popular for
complex and highly coupled problems where other probabilistic approaches fail. When computing
crash probabilities, the Monte Carlo approach clearly returns better results since it does not suffer
from the discretization of the state space.
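The random-sampling idea behind Monte Carlo crash estimation can be sketched as follows (the sampler interface, safety radius, and simple 2-D trajectories are hypothetical simplifications; [8] uses far richer vehicle models):

```python
import random

def monte_carlo_crash_probability(ego_traj, sample_other_traj,
                                  n_samples=10000, safety_radius=2.0, seed=0):
    """Estimate crash probability by sampling many possible trajectories
    of the other traffic participant and counting how often one comes
    within the safety radius of the ego trajectory."""
    rng = random.Random(seed)
    crashes = 0
    for _ in range(n_samples):
        other = sample_other_traj(rng)  # one randomly drawn trajectory
        if any(((xa - xb) ** 2 + (ya - yb) ** 2) ** 0.5 < safety_radius
               for (xa, ya), (xb, yb) in zip(ego_traj, other)):
            crashes += 1
    return crashes / n_samples
```

Because it samples the continuous state space directly, the estimate does not inherit the discretization error of a Markov chain abstraction, matching the observation above.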
1.2 Violation of Traffic Regulations
There are several strategies for handling violations of traffic regulations. The approaches that
[8] considers are:
1. "...assume that the traffic participants respect the traffic rules...": this means assuming the driver
respects all traffic rules, e.g., changing lanes correctly, obeying speed limits, and driving through the correct lanes.
2. "...assume that a non-conform driver is a reckless driver": the speed limit assumption, the
assumption that this driver stays in allowed lanes, etc., are no longer considered in the
prediction, but this would be too radical.
3. "...only disable the speed limit regulation if only the speed limit is violated or only disable the
allowed lanes assumption if this assumption is violated, etc.": this is the most challenging strategy
because one has to first identify the violation.
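The third strategy, dropping only the assumptions that are actually observed to be violated, can be sketched in a few lines (the assumption names are illustrative, not from [8]):

```python
def active_assumptions(observed_violations):
    """Strategy 3: start from full rule-conformance assumptions and
    drop only those the driver has been observed to violate, keeping
    the prediction as constrained as the evidence allows."""
    assumptions = {"speed_limit", "allowed_lanes", "safe_following_distance"}
    return assumptions - set(observed_violations)
```

The hard part, of course, is the violation detection that feeds `observed_violations`, not the bookkeeping itself.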
1.3 Motion Patterns
Each motion pattern is represented by a chain of Gaussian distributions, and statistical
descriptions of typical motion patterns are then formed. According to [17], "Given a set of object
trajectories, we can automatically construct statistical motion patterns, which are further used to detect
anomalies and predict object future behaviors." After motion patterns are obtained, we can predict the
future trajectory along which an object is going to move based on the current, partially observed
trajectory.
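A toy version of this matching step, scoring a partially observed trajectory against chains of Gaussians and extrapolating the winner, might look like this (one-dimensional positions and the pattern format are simplifying assumptions of ours, not the actual model of [17]):

```python
import math

def gaussian_log_pdf(x, mean, var):
    """Log-density of a 1-D Gaussian."""
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

def predict_from_patterns(observed, patterns):
    """Each pattern is a chain of (mean, variance) pairs over 1-D
    positions. Score the partially observed trajectory under each
    pattern and return the most likely pattern's remaining means as
    the predicted future trajectory."""
    best_score, best_pattern = float("-inf"), None
    for pattern in patterns:
        score = sum(gaussian_log_pdf(x, m, v)
                    for x, (m, v) in zip(observed, pattern))
        if score > best_score:
            best_score, best_pattern = score, pattern
    return [m for m, _ in best_pattern[len(observed):]]
```

An anomaly detector falls out of the same machinery: a trajectory whose best score stays below a threshold matches no learned pattern.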
1.4 Moving Objects
A moving object is always associated with a cluster of pixels in the feature space, and the features of
each cluster change only slightly between consecutive frames. On the basis of background subtraction
and feature extraction, a fast, accurate algorithm is used to cluster foreground pixels. Each cluster
centroid corresponds to a moving object, or a part of a moving object, that the sensors can capture.
Every probability distribution in a motion pattern corresponds to more than one point feature vector
in a trajectory. The number of motion patterns is estimated rather than assumed, and abnormal
trajectories are allowed to exist in the sample trajectories. This makes the motion patterns more
flexible in adapting to changes in object position along the direction of motion.
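The background-subtraction-then-cluster pipeline can be sketched on tiny grayscale frames as follows (the thresholds and the greedy single-link grouping are illustrative simplifications, not the fast algorithm referenced above):

```python
def cluster_foreground(frame, background, diff_threshold=30, link_dist=2):
    """Background subtraction followed by a simple single-link grouping
    of foreground pixels; each cluster centroid stands for (part of)
    a moving object."""
    h, w = len(frame), len(frame[0])
    # Foreground = pixels that differ strongly from the background model.
    fg = [(r, c) for r in range(h) for c in range(w)
          if abs(frame[r][c] - background[r][c]) > diff_threshold]
    clusters = []
    for p in fg:
        for cl in clusters:
            # Join the first cluster containing a nearby pixel.
            if any(abs(p[0] - q[0]) + abs(p[1] - q[1]) <= link_dist for q in cl):
                cl.append(p)
                break
        else:
            clusters.append([p])
    return [(sum(r for r, _ in cl) / len(cl), sum(c for _, c in cl) / len(cl))
            for cl in clusters]
```

Tracking the centroids frame to frame yields the object trajectories that the motion-pattern learning above consumes.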
1.5 Crash Probability
The crash probability of each time interval is computed independently of previous crash probabilities: each
possibility of a crash is considered under the condition that no crash has happened yet. According to
[8], where a crash is possible, the probability of the crash is computed from the probability
distribution within the reachable set. One example given is the prevention of the
"tunneling effect," which occurs when two vehicles cross each other at high velocity. In this case, the
stochastic reachable sets of points in time might not intersect, while the stochastic reachable sets of
time intervals do intersect. The disadvantage of the computation with time intervals is that the
uncertainty is greater than for points in time, which can lead to wrong crash probabilities. Ultimately,
"...the crash probability of the autonomous vehicle is
approximated based on the stochastic reachable sets of other traffic participants and the planned
trajectory of the autonomous vehicle" [8], and the vehicle can take action when unexpected events happen.
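Because each interval's crash probability is conditioned on no crash having occurred before it, the overall crash probability follows from the complement of the survival product. A minimal sketch (our own illustration of the conditioning, not code from [8]):

```python
def total_crash_probability(conditional_probs):
    """Each entry is the crash probability of one time interval,
    conditioned on no crash having happened before it. The overall
    crash probability is one minus the probability of surviving
    every interval."""
    survive = 1.0
    for p in conditional_probs:
        survive *= (1.0 - p)
    return 1.0 - survive
```

For example, two intervals each with conditional probability 0.1 combine to 1 - 0.9 * 0.9 = 0.19, not 0.2.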
1.6 Road Mapping
In order for an autonomous robot (in this case, a vehicle) to follow a road, it needs to know
where the road is. To stay in a specific lane, it needs to know where the lane is. For an autonomous
robot to stay in a lane, the localization requirements are on the order of decimeters. Taking this into
consideration, the autonomous vehicle needs to know where it is, what the environment is like, what is in
it, and where and how it is supposed to move (lanes). According to [22], "Yet such accuracy cannot be
achieved reliably with GPS-based inertial guidance systems, specifically in urban settings."
1.6.1 The Global Positioning System
The Global Positioning System (GPS) is fully operational and provides accurate, continuous,
worldwide, three-dimensional position and velocity information to users. GPS also disseminates a
form of Coordinated Universal Time (UTC). This network also uploads navigation and other data to
the 24 satellites that make GPS possible. According to [19], "GPS can provide service to an unlimited
number of users since the user receivers operate passively." For map representation, GPS is
subject to systematic noise because it is affected by atmospheric properties; for the vehicle,
the noise is assumed independent.
For use in an autonomous vehicle, more than this is needed. According to [6], "Both the
mapping and localization processes are robust to dynamic and hilly environments." Since urban
environments are dynamic, for localization to succeed the vehicle has to distinguish static
aspects of the world from dynamic ones. Distinguishing between static and dynamic aspects
helps the mapping process and the process of knowing where the vehicle can and must move to get from point A
to point B. For this purpose, autonomous vehicles can use Simultaneous Localization And Mapping
(SLAM), whose "...methods are employed to bring the map into alignment at intersections and other
regions of self-overlap" [22], and LIght Detection And Ranging (LIDAR), "... a remote
sensing method used to examine the surface of the Earth" [7].
1.6.2 Simultaneous Localization And Mapping
In an unknown environment, the vehicle starts at a location with known coordinates. As the vehicle
starts moving through the unknown environment, the SLAM system seeks to acquire a map of it while at the
same time keeping track of its own position. The map is acquired by locating the static objects in the
environment. For an example of what SLAM looks like from a motorcycle, see [13].

The top of the image shows the street as we see it; the bottom shows how SLAM sees the same
street and begins building a map [13].
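The acquire-a-map-while-locating loop can be caricatured in a few lines (this sketch uses dead-reckoning localization only; a real SLAM system additionally corrects the pose by matching detections against the map, e.g., at regions of self-overlap):

```python
def slam_step(pose, odometry, detections, world_map):
    """One highly simplified SLAM update: advance the pose estimate
    by odometry, then register detections of static objects (given in
    the vehicle frame) into a growing world-frame map."""
    x, y = pose
    dx, dy = odometry
    x, y = x + dx, y + dy            # localization (dead reckoning here)
    for ox, oy in detections:        # mapping: vehicle frame -> world frame
        world_map.add((x + ox, y + oy))
    return (x, y), world_map
```

Without the missing correction step, odometry error accumulates; that drift is precisely what loop closure in full SLAM repairs.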

1.6.3 Light Detection And Ranging


LIDAR uses lasers to measure the height of the ground, forests, and buildings. In urban
environments it can capture vehicles, people, and occasionally the lane markings on the streets. LIDAR has
three ways to collect data: from the ground, from an airplane, or from satellites in space. The data taken
from airborne LIDAR is freely available to the public through the National Ecological Observatory Network
(NEON). From the electromagnetic spectrum, the colors used are green light (532 nm) or near-infrared
light (1064 nm), because they reflect strongly off vegetation. LIDAR uses GPS to track
the x, y, and z position of where the laser is pointing. An Inertial Measurement Unit (IMU) is used to track
the position of the LIDAR data collector, in this case the vehicle. Finally, LIDAR needs a computer,
also onboard the vehicle, that keeps all the reported data. For more information about
how LIDAR works from an airborne platform, see [2]; for an example of how urban areas look
with LIDAR, see [22]. Another fact about LIDAR is that it can do the mapping even at night.

Vehicle perspective view of an urban environment with LIDAR from [15].
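Combining the GPS position and IMU orientation with a beam's range and direction, as described above, georeferences each return. A planar sketch (the function name and the 2-D simplification are ours; real systems work with full 3-D attitude):

```python
import math

def lidar_point_world(sensor_xyz, heading_rad, beam_range, beam_azimuth_rad):
    """Georeference one (planar) LIDAR return: combine the GPS position
    and IMU heading of the sensor with the beam's range and azimuth to
    get the point's world x, y (z passed through unchanged)."""
    sx, sy, sz = sensor_xyz
    angle = heading_rad + beam_azimuth_rad   # beam direction in world frame
    return (sx + beam_range * math.cos(angle),
            sy + beam_range * math.sin(angle),
            sz)
```

Applying this to every return in a sweep produces the point cloud from which views like the one above are rendered.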

2. User Interfaces in Autonomous Cars


2.1 Background of Human Error and Relevance of AutomotiveUI
Current research in the AutomotiveUI field is concerned with delegating tasks of increasing
skill levels, which will require learning algorithms at higher abstraction levels. Although
AutomotiveUI draws on many topics and fields, one of its major focuses is safety.
Safety is achieved by creating Human-Machine Interfaces (HMIs) that account for human-error
mismatch on three levels. Skill-based behavior mismatch errors occur where operators make
mistakes in sensorimotor performance. On the next level, rule-based behavior, users might apply
misapplied or inappropriate heuristics to similar chains of events. The third, and highest, human-error
mismatch is knowledge-based behavior. These errors occur when encountering unknown or
unexpected scenarios, and because of this uncharted quality, control of performance requires
conceptual problem solving [21].

figure A [21]
AutomotiveUI seeks to address the strong need for efficient and clear
communication between the operator and the car at higher levels of autonomy.
A six-year survey of AutomotiveUI conferences shows that the majority of papers published since
2009 have been concerned with infotainment and general driving [20]. Trends show that attention
in the AutomotiveUI community is shifting towards improving the act of driving safely. This
focus is demonstrated by research into advanced driver assistance system (ADAS) technology
in cars. Such research has increased safety with features such as cruise control, parallel parking, fully
automated parking, lane switching, etc. The number of AutomotiveUI publications about this type of
technology jumped from 7% in 2014 to 32% in 2015 [20].
2.2 Autonomous Cars as Agents and Levels of Autonomy
At this point it is important to describe an autonomous car as an agent, a term used in the field of
Agent-Oriented Software Engineering. Davidsson and Johansson offer a descriptive definition of
an agent as a self-contained entity that has a state, is situated (able to perceive and act) in an
environment, is rational, and is at least reactively autonomous [9].
There is a sense of coordination and interdependency between human and machine at each of the three
levels of human-task mismatch, where the machine is an extension of the user, at the lower levels
having little if any degree of autonomy and at the highest level having complete autonomy. These have
been categorized into three levels of human-machine task mismatches, but the National Highway
Traffic Safety Administration separates autonomy into five levels. At Level 0, the car
has no autonomy. At Level 1, the driver can disengage from either the pedals or the steering
wheel. At Level 2, drivers can disengage from both the pedals and the steering wheel. At Level 3, the
autonomous vehicle is able to drive unassisted for longer periods of time and alerts the driver
when re-engagement is necessary. And at Level 4, the car can truly be considered autonomous
[20].

table A [20]
Because autonomous agents are thought of as self-thinking entities at varying degrees of self-awareness,
it is interesting to note the qualities by which agents have been described by the Artificial
Intelligence community. Agents are judged on their ability to perceive, their ability to act, their level of
autonomy (being able to make decisions), and their ability to communicate. From these four criteria,
Davidsson and Johansson identified four agent types that are described based on their autonomy and
communication capabilities: Reactive, Plan Autonomous, Goal Autonomous, and Norm
Autonomous.

figure B [9]
There is an interesting connection between table A and figure B; the relationship becomes apparent in
table B. The description of what the driver can do depending on the autonomy of the car lends itself
to the discipline of designing safety into the HMI of an autonomous car. Cars of higher autonomy
will have to become increasingly communicative to interact with ever more disengaged drivers.
Taking humans out of the picture requires autonomous cars to be highly connected to systems and
networks, to create the enormous corpus of knowledge that the onboard AI will need. As McBride puts
it in "The Ethics of Driverless Cars," "Every type of connection should be pursued to compensate for
the loss of human connection" [25].

table B
Another pattern is apparent from table B. Although the factor of communication is left out,
charting the data of the two figures side by side, one can see that as autonomy increases, so does the
agent's ability to act. On closer inspection, the classifications follow an upward trajectory to full
autonomy through the four criteria Davidsson and Johansson have identified. The classifications
progress from the lower level of perception, in the form of choosing from a preselected set of steps
towards a goal, on to the higher level of perception, in terms of creating on
its own the steps to take in order to achieve a certain goal, and finally to decision-making
based on self-awareness.
2.3 Trust, Control and Imperfect Actors
Because of the interdependencies between tasks, the task setting, and the actors, there need
to be improvements in the design of computational social structures, i.e., how one person might
interact with an agent, how one person might interact with multiple agents, etc. [11]. These focused
efforts will reinforce trust when humans and machines interact.
The issue of trust cannot be tackled head-on solely by the efforts of software engineering
research. Trust is only one of the results of designing safety into a system, though it is certainly a
major concern with complex, safety-critical machines that are autonomous.
The DARPA Grand Challenge continually shows room for increased collaboration among
the fields of sensor technology, machine perception, control engineering, artificial intelligence, driver
psychology, human-machine interaction, market acceptance, and legal and liability concerns [27]. Further
research will most likely extend the reach of autonomous cars to include other fields. Indeed,
autonomous cars have proven to be more interdisciplinary than expected; we would suggest describing
the field of autonomous cars as poly-disciplinary. This makes system safety in autonomous cars that
much more complex. Software engineers should be expected to design systems and subsystems that
operate with more abstract physiological human inputs, i.e., hand gestures and facial expressions, to
create an agent's belief base, which contributes to building a complete context [11].
Advanced Driver Assistance Systems (ADAS) already interact heavily with the driver.
Because the systems embedded in today's cars are ADA systems, their main purpose is
supporting the driver and the driver's safe driving technique. This supporting role requires the
transfer of large amounts of information as a result of human-machine interaction. Because cars, as
systems, are increasingly operated through embedded autonomous systems, the question
becomes: which system-generated information is relevant to the driver during real-time scenarios? The
question that follows from the first is: can the system be relied on to consistently deliver the
right information to act upon?
The University of Michigan Transportation Research Institute conducted a survey in 2015 of
505 participants and found the most frequent preference was for no self-driving (43.8%), followed by
partially self-driving (40.6%), with completely self-driving being the least preferred (15.6%)
[Schoettle]. It seems that in the public eye, autonomous cars are too unpredictable to trust. This is
because of the correlation between predictability and expectation: predictability builds expectations,
and in turn expectations enforce predictable behavior. If there is no trust, or belief base in other
words, between two actors, then expectations begin to break down [11].

figure C [11]
It is important not to underestimate the fact that environments will dictate the delicate balance
of control between the driver and the autonomous car. After a system is
delivered, all system boundaries will be put to the test by operators, possibly all at once. These
operators will face situations where all their choices might be erroneous and will further
exacerbate a hazardous state. The onus, then, is on designing user interfaces that can be trusted to
reliably de-escalate hazardous states when both the autonomous car and the operator are in danger.
Software engineers, then, are entrusted to design the system so that success is not dependent on every
decision being a good one [12].
Another major distinction is that requests for information, given or taken among the
driver, the system, and the subsystems, should be considered more like transactions.
While the majority of research is going into outward-looking sensors and cameras, there
should be comparable effort towards providing the same level of awareness of passengers within the
car. A complete "human-autonomous social ecosystem" [11] demands not only that an autonomous
car recognize and assess other agents, but also that the autonomous car build an internal context,
complete with models for the actions of actors in the driver's seat as well as models for interactions
between passenger actors.
"Do humans cause accidents?" [21] A more revealing question would be: how do humans
cause accidents? The answer is not simple, but if we switch from thinking of car crashes as
"accidents" and start to think of car crashes as system failures, we can start to move towards an
answer. Car crashes thus come about through designing faulty equipment, designing poor systems,
instituting incorrect procedures, and incorrectly operating machines.

Finally, crashes are a probable outcome of driving. Autonomous cars are complex, yet they do
not fully drive themselves at the moment. Autonomous agents cannot yet reproduce the human act of
driving: cars operate, humans drive. Occasionally, humans get into car crashes; specifically,
humans who are not properly supported nor prepared to re-engage in driving get into car crashes.
The factors leading up to an accident are not necessarily the result of human error.
Autonomous cars are such complex systems that the amount of blame to place on either humans or
machines is not entirely understood at the moment. In fact, the unknown root cause(s) of an accident
expose errors in autonomous car Human-Machine Interaction (HMI) models.

3. Decision Making
One feature crucial for safety in autonomous cars is the ability to make complex decisions.
General mapping, sensing, and route planning may be sufficient for straightforward highway and road
traffic scenarios. However, situations arise in real-world scenarios that require a car of high-level
automation to choose between potential future states to coordinate right-of-way decisions, avoid
collisions, and even minimize damage when a collision is inevitable. The capability of autonomous
cars to navigate these situations successfully will play a vital role in gaining consumer confidence and
the trust of the general public.
3.1. Challenges
Certain obstacles must be accounted for and overcome in developing capable decision-making
software. Foremost among these is the problem of interaction with non-autonomous cars.
Human behavior is far less predictable than machine behavior and carries greater risk when it comes
to coordinating actions between multiple vehicles. This is no mere ephemeral problem, as research
indicates that, all else being equal, even if every automobile sold in the U.S. over the next 30
years were fully autonomous, this would only result in 90% of cars being replaced [14]. While we may
reasonably look forward to a future in which all automotive transport is autonomous, there is no
sidestepping the uncertainties that arise from human drivers sharing a roadway with driverless cars.
Safety in non-highway traffic hinges on sound decision making. Autonomous cars at
intersections without traffic lights or with malfunctioning traffic lights must coordinate right-of-way
actions. These actions become less straightforward when you add human drivers who traditionally
have been able to communicate their actions with each other through hand gestures and may not
strictly adhere to right-of-way decisions that rely on precise timing of arrival at the intersection. The
introduction of these situations goes beyond simple algorithms for lane following, sensing, and basic
maneuvers [29].
The most important decisions a driverless car will have to make are perhaps the most
complex. There is no way to prevent other cars or environmental factors from dangerously disrupting
an autonomous car's path. Automatic braking systems are becoming more common and have
prevented a number of accidents. Autonomous cars have already made, and will continue to make, use of
this technology to increase safety [18]. However, braking is sometimes insufficient for preventing an
accident and in some situations can potentially exacerbate one. This raises the challenge of
programming software capable of collision-avoidance maneuvers. These maneuvers involve split-second
judgments, potentially involving a multitude of vehicles. The complexity of these situations
increases exponentially with each dynamic object involved. Programmers must find a way to navigate

these hazardous situations and minimize the risk to passengers of all cars involved as well as any
pedestrians or other travelers on the road.
Autonomous cars programmed for such life-or-death decisions will inevitably be treading into
ethically significant territory. In the event of an unavoidable collision, cars running on software that
can collect, interpret, and react to data in minuscule amounts of time will be far more adept at
discerning and bringing about different outcomes than a human driver acting purely on instincts
would. Even setting aside the eventual legal implications, programmers must take into account ethical
considerations to minimize damage in catastrophic scenarios.
3.2. Coordination and Navigational Decision-Making
One of the areas that has proved challenging for fully autonomous cars to handle is
interactions at intersections. Research into these vehicles has struggled to find effective ways of
coordinating traffic at these junctions. Any successful strategy for dealing with the coordination
problem must determine, at any given intersection, who is allowed to cross, and in which order.
Additionally, all conflicts in the relevant vehicles' trajectories must be anticipated and resolved.
Finally, while not as critical, autonomous traffic should be coordinated efficiently,
minimizing waiting times and enhancing traffic control [29]. Attempts to meet this challenge have
included the proposition that if a few simple traffic laws are programmed into these cars, the problem
would work itself out. However, in the Cybercars 2 simulation, it became apparent that this strategy is
ineffective: even complete traffic rules and regulations leave cars susceptible to deadlocks,
blockades, and erroneous driving, as current rules are neither complete nor fully consistent [29].
A possible solution, as argued in [29], is for autonomous cars to generate hierarchical abstract
world models to manage the level of data utilized in these decisions. At lower levels of the vehicle's
world model, data is collected through sensors on the physical environment around the car and
trajectories are planned. On this level, objects are detected and the car may brake or perform other
simple maneuvers to avoid collision. The higher level of this model would contain an interpretation,
not a mere representation of the environment. To illustrate the difference, a street sign would be
represented at the lower level as a geometric object, such as a pole, to be avoided. The higher level,
however, would hold the semantic meaning of the sign and interpret it appropriately (e.g., as a stop
sign, a speed limit sign, etc.) [29].
The advantage of this strategy is that decisions regarding lane changes and intersections could be
made while leaving out any unnecessary data, preventing needless complication [29]. These
actions would occur using only high-level abstraction models. In fact, the only data needed to
complete these decisions would be information in three categories [29]:
1. Lane section: general information about the lane. This includes the length of the lane, successor
lanes (where an old lane joins a new lane), speed limit, etc.
2. Lane objects: the objects in a lane. Vehicles, pedestrians, and cyclists, including the physical
dimensions of each, would be represented.
3. Lane relations: used to represent interdependencies between lanes. For example, open lanes
adjacent to the vehicle's current lane would be labeled "neighboring." Other relational labels
would include "conflicting" (two lanes merge with a possible collision path for vehicles), "nearby"
(an adjacent lane that is closed but available in an emergency), and "opposing" (an adjacent lane
for oncoming traffic that can be used to pass cars when empty). Right-of-way relations would
also be stored here.
With these three categories represented, decisions could be made following a basic set of
rules [29]. For example, if a car is to move into a lane with a "conflicting" relationship, it
could check whether a car (a lane object) is in the conflicting lane within a critical distance and, if so, which
car has the right-of-way. The car with the right-of-way would continue on while the other car would
simply brake. Under this model, when a car approaches an intersection, it would merely sense it as a
conflicting relation between multiple lanes and handle it accordingly. This simplifies decision-making considerably into an efficient strategy that can solve the coordination problem.
The advantages of this strategy were apparent in its implementation in the Cybercars 2
simulation [29]. The strategy reduced updating time, as most of the relevant information at the higher
levels of the world model needed only to be obtained once. It offered flexibility and adjustability and
allowed for higher and lower abstraction information to be compared to ensure data consistency.
Further testing is needed, but preliminary results have been promising.

3.3. Avoidance Maneuvers and the Ethics of Collision Mitigation


Basic collision avoidance (CA) systems are being used increasingly by automotive
manufacturers. Primarily, implementation has been limited to automatic braking systems which can
perform emergency braking for inattentive drivers [18]. While this can reduce certain types of
accidents, many potential accidents require more sophisticated action to circumvent danger.
Autonomous CA systems for automobiles have a precedent in the aviation industry. Aircraft
have been using CA systems for years to assist pilots and air traffic control in maintaining a safe
separation between planes in airspace [18]. One method adopted by these systems applies Bayesian
inference using Monte Carlo techniques [18]. This approach quantifies the probability of collision
for each available maneuver a car can perform to avoid an accident. The system is designed to choose,
from the space of possible actions, the one with the least risk of collision.
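The idea can be sketched with a toy one-dimensional Monte Carlo estimate. The maneuver set, Gaussian noise model, and safety gap below are simplifying assumptions for illustration, not the actual framework of [18]:

```python
import random

def collision_probability(own_final_pos, other_mean, other_std,
                          safe_gap=2.0, samples=10_000, rng=None):
    """Estimate P(collision) for one maneuver by sampling noisy
    predictions of the other vehicle's final position (1-D toy model)."""
    rng = rng or random.Random(0)  # seeded for reproducibility
    hits = sum(
        abs(own_final_pos - rng.gauss(other_mean, other_std)) < safe_gap
        for _ in range(samples)
    )
    return hits / samples

def safest_maneuver(maneuvers, other_mean, other_std):
    """maneuvers: dict mapping maneuver name -> own predicted final
    position. Returns the maneuver with the lowest estimated risk."""
    return min(maneuvers,
               key=lambda m: collision_probability(maneuvers[m],
                                                   other_mean, other_std))

# Usage: the other car is predicted around x = 10 m; swerving left
# (ending at x = 20) carries far less estimated risk than braking (x = 9).
choice = safest_maneuver({"brake": 9.0, "swerve_left": 20.0},
                         other_mean=10.0, other_std=1.5)
print(choice)  # swerve_left
```

The essential point survives the simplification: risk is estimated per maneuver from sampled uncertainty, and selection is a minimization over the action space.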
Due to the uncertainties of driving, this is not the end of the story. While the safest step is to
avoid collisions altogether, some collisions are unavoidable. In real world situations, not all factors
can be neatly controlled. Weather can create hazardous conditions on roads and wildlife will continue
to pose a threat to safe driving conditions [14]. To account for this, autonomous vehicles should
implement collision mitigation (CM) systems [16]. These systems allow vehicles to calculate the risk
associated with the anticipated outcome of each potential maneuver when a collision is inevitable. For
example, colliding with a trash can is a more favorable outcome than colliding with a brick wall in
terms of both personal safety and property damage. CM systems are still in early development with
regard to autonomous cars, but certain mathematical models have been proposed using crucial
measures such as time-to-react (TTR) in conjunction with detailed sensor detection to analyze the cost
of unique collision outcomes [16].
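One way to sketch such a cost-based choice is below; the severity scores and the simplified TTR check are invented for illustration and do not reproduce the actual model of [16]:

```python
# Assumed severity cost per obstacle class (higher = worse outcome).
# These numbers are placeholders, not values from the cited model.
SEVERITY = {"trash_can": 1.0, "parked_car": 5.0,
            "brick_wall": 20.0, "pedestrian": 1000.0}

def mitigation_choice(options):
    """options: list of (maneuver_name, obstacle_class, ttr_seconds).

    Maneuvers with positive time-to-react can still avoid collision
    and are preferred (most time to spare first); once no maneuver
    can avoid impact, minimise the severity of the collision outcome.
    """
    avoiding = [o for o in options if o[2] > 0]
    if avoiding:
        return max(avoiding, key=lambda o: o[2])[0]
    return min(options, key=lambda o: SEVERITY[o[1]])[0]

# Usage: no maneuver can avoid impact (all TTR <= 0), so the car
# accepts the trash can rather than the brick wall.
print(mitigation_choice([("swerve_right", "brick_wall", 0.0),
                         ("brake", "trash_can", -0.2)]))  # brake
```

The design point this illustrates is the handoff from collision avoidance to collision mitigation: the objective switches from preventing any impact to minimizing the cost of the least-bad one.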
Ultimately, a successful CM system must be designed with ethics in mind. Autonomous cars
will be presented with situations in which a collision is not only inevitable, but the outcomes it is
faced with involve more than monetary cost. A plausible situation illustrating this is put forward in a
TED-Ed talk by Patrick Lin [28]. Lin describes a scenario in which a self-driving car is boxed in between
a motorcyclist, an SUV, and a large truck ahead of it carrying unsecured cargo. Were some of the
truck's heavy cargo to come loose and fall off the truck toward the self-driving car, the
actions the car can take are limited: it can bear the brunt of the cargo, exposing its occupant to danger, or
it can swerve into the cyclist or the SUV, with potentially fatal results for the respective passengers.
Predicaments of this kind are faced by human drivers all the time, but there are a few key
differences in the case of an autonomous vehicle. For one, manufacturers of these cars will bear more
legal responsibility than ever before due to the lack of a human driver to be held responsible. When
dealing with full autonomy, not only the design of the car, but also its actions are the result of the
company that developed the vehicle. Secondly, while human drivers act on pure instinct in
split-second decision-making and often do not have the time, information, or emotional capacity to make a
rational choice in such circumstances, self-driving cars are not hampered by many of these limitations
and will not be able to fall back on such excuses [23]. This puts an incredible weight on programmers
because in a significant sense they will be making these decisions for the unfortunate passengers, in
some cases, years ahead of time at the drawing board.
Incredibly difficult questions arise in this context. It may seem that the obvious best (or least
undesirable) outcome is the one in which the fewest are exposed to the least amount of harm, but even
this common sense principle has serious complications. A car programmed to act according to such
strict consequentialist principles may end up targeting safer vehicles to avoid crashing into less sturdy
models [23]. This effectively penalizes those who opt for safer vehicles, potentially canceling the
protection benefits and perhaps even counter-intuitively making such cars a more dangerous option on
the road. Putting a target on those who take safety precautions strikes many as unethical
discrimination, illustrating how the life-or-death situations programmers will have to plan for evade
any simple answer.
While CM systems are still early in their development, computational moral modeling is in its
infancy [14]. McLaren has developed software to aid with ethical decision-making. One of these
programs, SIROCCO, analyzes case studies and "[identifies] principles from the
National Society of Professional Engineers code of ethics relevant to an engineering ethics problem" [14].
The U.S. Army has also developed a computer model known as the Metric of Evil to assist in ethical
decision-making in a battlefield environment [14]. Ethics are extremely difficult to boil down to
algorithms, but, nonetheless, progress has been made. Eventual acceptance of driverless cars will be
contingent on whether the public feels secure with such vital decisions being made by software.
Ethical clarity and foresight will be essential in the road forward.

Conclusion
Overall, as the automobile industry becomes increasingly autonomous, software safety will
need to be a prime area of focus. In this paper, we have highlighted some key areas in which more
research will need to be conducted.
Looking forward, environmental interaction research is expected to yield new methods that build a
clear picture of the environment for the car while using smaller equipment. Future
research will also need to address challenges that currently lack a complete solution, such as
identifying a traffic police officer versus a traffic light and deciding which to obey.
The AutomotiveUI field is moving as quickly as possible to take advantage of the
developments in onboard electronic equipment, such as the technology mentioned in Environment
Interaction. Real-time transfer of control is a major focus in the AutomotiveUI community to address
emerging data on increasingly disengaged drivers in autonomous cars. Studies and surveys show there
is a need for more refined models of human-machine interaction. The slow progress in AutomotiveUI
is also due to the complexity of artificial intelligence and its integration into the already stimulus-rich
environment within the vehicle.
This paper discussed some of the challenges that decision-making models in autonomous cars
must address. One promising area in this field is the resolution of decision-making data into an
abstract world model for efficiency. We also examined how cars can avoid collisions through
innovative mathematical models, and how collision mitigation can minimize damage when a
collision is unavoidable. Lastly, this paper touched on some of the ethical issues that collision
mitigation systems raise.
Autonomous cars are the future. While we understand the destination with some clarity, the
trajectory is far less certain. The most vital consideration on this journey is the safety of those
involved. Keeping this objective in mind, we must study the terrain ahead with the utmost diligence to
secure a brighter future.

References
1. Google Self-Driving Car Project Monthly Report (2015, May) [Online]. Available:
http://static.googleusercontent.com/media/www.google.com/en/us/selfdrivingcar/files/reports/report-0515.pdf

2. How Does LiDAR Remote Sensing Work? Light Detection and Ranging, YouTube, 24-Nov-2014. [Online]. Available: https://www.youtube.com/watch?v=eybhnsunidu. [Accessed: 12-Jul-2016].
3. National Conference of State Legislatures. (2016, July 1) Autonomous | Self-Driving
Vehicles Regulation [Online]. Available: www.ncsl.org/research/transportation/autonomous-vehicles-legislation.aspx
4. Road Crash Statistics, Road Crash Statistics. [Online]. Available:
http://asirt.org/initiatives/informing-road-users/road-safety-facts/road-crash-statistics.
[Accessed: 16-Jun-2016].
5. The Tesla Team. (2016, June 30) A Tragic Loss [Online] Available:
https://www.tesla.com/blog/tragic-loss
6. TED, Chris Urmson: How a driverless car sees the road, YouTube, 26-Jun-2015.
[Online]. Available: https://www.youtube.com/watch?v=tiwvmrtluwg. [Accessed: 16-Jun-2016].
7. What is LIDAR?, What is LIDAR?, 29-May-2015. [Online]. Available:
http://oceanservice.noaa.gov/facts/lidar.html. [Accessed: 06-Jul-2016].
8. M. Althoff, Reachability analysis and its application to the safety assessment of autonomous
cars. 2010.
9. P. Davidsson, S. J. Johansson, On the Metaphysics of Agents, AAMAS 2005, 25-29 July,
2005, Utrecht, Netherlands
10. A. Davies. (2016, February 29) Googles Self-Driving Car Caused Its First Crash [Online].
Available: https://www.wired.com/2016/02/googles-self-driving-car-may-caused-first-crash/
11. P. Dortmans, Trusting Technology in a Complex World, Science and Technology for
Safeguarding Australia, (DSTO), July 2015
12. S. J. Dubner, What are Gender Barriers Made Of?, podcast, 20 July, 2016, 11:00pm;
http://freakonomics.com/podcast/gender-barriers/
13. M. Ernst and S. Scheidegger, Monocular Simultaneous Localisation and Mapping for Road
Vehicles, 08-Jun-2015. [Online]. Available: https://youtu.be/0raz4ecf8fy. [Accessed: 12-Jul-2016].
14. N. J. Goodall. (2014, June 8) Machine Ethics and Automated Vehicles [Online]. Available:
http://link.springer.com/chapter/10.1007/978-3-319-05990-7_9
15. S. Haj-Assaad, What You Need To Know About Autonomous Vehicles, AutoGuide.com
News, 15-Jul-2014. [Online]. Available: http://www.autoguide.com/auto-news/2014/07/need-know-autonomous-vehicles.html. [Accessed: 14-Jul-2016].

16. J. Hillenbrand. (2006, December) A Multilevel Collision Mitigation Approach - Its Situation
Assessment, Decision Making, and Performance Tradeoffs [Online]. Available:
http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4019437
17. W. Hu, X. Xiao, Z. Fu, D. Xie, T. Tan, and S. Maybank. A system for learning statistical
motion patterns. IEEE Transactions on Pattern Analysis and Machine Intelligence, 28:1450
1464, 2006.
18. J. Jansson and F. Gustafsson. (2008, September) A Framework and Automotive Application
of Collision Avoidance Decision Making [Online]. Available:
http://www.sciencedirect.com/science/article/pii/S0005109808000617
19. E. D. Kaplan and C. J. Hegarty, Understanding GPS: Principles and Applications; Second
Edition. Norwood, MA: Artech House, 2006.
20. A. L. Kun, S. Boll, A. Schmidt, Shifting Gears: User Interfaces in the Age of Autonomous
Driving, IEEE Pervasive Computing, Jan - Mar, 2016
21. N. G. Leveson, Safeware, System Safety and Computers, Addison-Wesley Publishing
Company, 1995
22. J. Levinson, M. Montemerlo, and S. Thrun, Map-Based Precision Vehicle Localization in
Urban Environments, Robotics: Science and Systems III, 2007.
23. P. Lin. (2016, May 22) Why Ethics Matters for Autonomous Cars [Online]. Available:
http://link.springer.com/chapter/10.1007/978-3-662-48847-8_4
24. N. Lum and E. Niedermeyer. (2016, July 14) How Tesla and Elon Musk Exaggerated Safety
Claims About Autopilot and Cars [Online]. Available:
http://www.thedailybeast.com/articles/2016/07/14/why-tesla-s-cars-and-autopilot-aren-t-as-safe-as-elon-musk-claims.html
25. N. McBride, The Ethics of Driverless Cars, SIGCAS Computers & Society, vol. 45, no. 3,
Sept 2015, pp.179-184
26. J. O'Neil-Dunne, LiDAR 101, YouTube, 25-Dec-2013. [Online]. Available:
https://www.youtube.com/watch?v=1l0gwrlv2cm. [Accessed: 12-Jul-2016].
27. U. Ozguner, K. Redmill, C. Stiller, Systems for Safety and Autonomous Behavior in Cars:
The DARPA Grand Challenge Experience, Proceedings of the IEEE, vol. 95, no. 2, Feb
2007, pp. 397-412
28. P. Lin. (2015, December 8) The Ethical Dilemma of Self-Driving Cars [Online].
Available: http://ed.ted.com/lessons/the-ethical-dilemma-of-self-driving-cars-patrick-lin
29. R. Regele. (2008, March 16) Using Ontology-Based Traffic Models for More Efficient
Decision Making of Autonomous Vehicles [Online]. Available:
http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4488328
30. C. Taylor. (2012, August 7) Google's Driverless Car is Now Safer than the Average Human
Driver [Online]. Available: http://mashable.com/2012/08/07/google-driverless-cars-safer-than-you/#oFwaOMPk0gqQ
31. S. Thrun and J. Leonard, Springer handbook of robotics. Berlin: Springer, 2008.
32. W. E. Wong, V. Debroy, A. Surampudi, H. Kim, and M. F. Siok, Recent Catastrophic
Accidents: Investigating How Software was Responsible, 2010 Fourth International
Conference on Secure Software Integration and Reliability Improvement, 2010.
