
Legal liability issues and regulation of

Artificial Intelligence (AI)


Dissertation work – Post Graduate Diploma in Cyber Laws and Cyber Forensics

Course Code: PGDCLCF


Submitted by: Jomon P Jose
Roll No. CLCF/588/17
Year : 2017-18

National Law School of India University


Bengaluru
Contents
Legal Citation: Table of Statutes
Case Laws
Introduction
    Methodology
Artificial Intelligence (AI)
    What is AI
    Types of AI
    AI Characteristics
    AI - History
    ABC3 - Engines that power AI
    The AI Spectrum
AI Benefits and applications
    AI - Applications and benefits
    AI applications in the legal field
    Limitations for AI application in the legal field
    Future of AI
AI Challenges and Issues
    AI in the news May 2018
    AI and Human Rights – Job loss
    AI and Labor laws
    AI and privacy issues
    AI and IPR issues
        IPR for AI creations
        IPR for creation of AI
    AI and competition concerns
    AI and Malpractice Claims
    AI with IQ and EQ
    AI overtaking Human beings
    AI and citizen rights
    AI and security
    AI and warfare and terrorism
    AI and responsibility for actions
AI and Liability
    Liability
    AI Liability
    Existing Liability Frameworks
    Tort – Respondeat Superior Liability
    Vicarious Liability / Agency Law
    Product Liability
        Product Liability through Negligence
        Product Liability through Contract
        Product Liability through Consumer Law
    Common enterprise Liability
    Criminal Liability
        The Perpetration-by-Another Liability (PBAL) Model: AI as Innocent Agents
        The Natural-Probable-Consequence Liability (NPCL) Model: Foreseeable Offenses
        The Direct Liability (DL) Model: AI Robots as subject of Criminal Liability
        What kind of defenses are available to AI?
    AI: The causation challenge
    Strict Liability for Dangerous activities
AI Regulation
    Liability Regime
    Insurance
    Individualization
    Licensing Model – Turing Registries
    AI – Explain the decision
    AI Regulation – EU
    AI and Cyber Security
    AI Privacy and data protection safeguards
    Building Capability
    International Cooperation
Conclusion
Bibliography
Legal Citation: Table of Statutes

Sl No.  Name of the statute                         Country                              Year
1       Consumer Protection Act                     India                                1986
2       Consumer Rights Act 2015                    UK                                   2015
3       Contract (Rights of Third Parties) Act      UK                                   1999
4       The Information Technology Act              India                                2000
5       The Copyright Act                           India                                1957
6       Indian Contract Act                         India                                1872
7       The Motor Vehicles Act                      India                                1988
8       Public Liability Insurance Act              India                                1991
9       Sale of Goods Act                           UK                                   1979
10      Sale of Goods Act                           India                                1930
11      UNCITRAL Model Law on Electronic Commerce   United Nations Commission on         1996
                                                    International Trade Law (UNCITRAL)
12      U.S. Code: Title 17 - COPYRIGHTS            USA                                  1976
13      U.S. Code: Title 35 - PATENTS               USA                                  1952

Case Laws

1. Banker v. Hoehn, 278 A.D.2d 720, 721, 718 N.Y.S.2d 438, 440 (2000) – Patient’s claim against a medical equipment manufacturer; intermediary doctrine / privity of contract.
2. Blue Spike, LLC v. Google Inc., No. 14-CV-01650-YGR, 2015 WL 5260506, at *5 (N.D. Cal. Sept. 8, 2015), aff’d, 2016 WL 5956746 (Fed. Cir. Oct. 14, 2016) – Patent eligibility of claims modelling human signal recognition.
3. Donoghue v. Stevenson, [1932] UKHL 100, 1932 SC (HL) 31, [1932] AC 562, [1932] All ER Rep 1 – Tort; product liability.
4. Ferguson v. Bombardier Services Corp., 244 F. App’x 944 (11th Cir. 2007) – Liability for automated technologies; autopilot manufacturing defect; contributory negligence.
5. FTC v. Tax Club, Inc., 994 F. Supp. 2d 461 (S.D.N.Y. 2014) – Common enterprise doctrine.
6. Go2Net, Inc. v. C I Host, Inc., 115 Wash. App. 73 (2003) – Internet advertising impressions generated by AI.
7. Hadley v. Baxendale, [1854] EWHC J70 – Foreseeability and causation as relevant factors in determining liability.
8. Hess v. Advanced Cardiovascular Sys., 106 F.3d 976, 981 (Fed. Cir. 1997) – Inventorship; individuals.
9. Hewlett Packard Co. v. ServiceNow, Inc., No. 14-CV-00570-BLF, 2015 WL 1133244 (N.D. Cal. Mar. 10, 2015) – Patentability of driverless cars.
10. Hills v. Fanuc Robotics Am., Inc., No. 04-2659, 2010 WL 890223, *1, *4 (E.D. La. 2010) – Personal injury resulting from automated machines.
11. Junior Books v. Veitchi, [1983] 1 AC 520 – Economic losses in tort.
12. Motorola Mobility, Inc. v. Myriad France SAS, 850 F. Supp. 2d 878 (N.D. Ill. 2012) – Good or service; product liability.
13. Naruto v. Slater, No. 3:2015-cv-04324, 2016 WL 362231, *1 (N.D. Cal. Jan. 23, 2016) – Copyright for animals.
14. Nelson v. American Airlines, Inc., 70 Cal. Rptr. 33 (Cal. Ct. App. 1968) – Liability for automated technologies; aircraft autopilot.
15. New Idea Farm Equip. Corp. v. Sperry Corp., 916 F.2d 1561, 1566 n.4 (Fed. Cir. 1990) – Inventorship for legal entities.
16. O’Brien v. Intuitive Surgical Inc., No. 10 C 3005, 2011 WL 304079, at *1 (N.D. Ill. Jul. 25, 2011) – Liability of surgical robots.
17. Payne v. ABB Flexible Automation Inc., 116 F.3d 480, No. 96-2248, 1997 WL 311586, *1-*2 (8th Cir. 1997) – Personal injury resulting from automated machines.
18. In re Ashley Madison Customer Data Sec. Breach Litig., 148 F. Supp. 3d 1378, 1380 (J.P.M.L. 2015) – Data security breach litigation involving computer programs simulating human interaction.
19. In re Toyota Motor Corp., 978 F. Supp. 2d 1053, 1100-01 (C.D. Cal. 2013) – Unintended acceleration in vehicles; liability when a defect cannot be traced.
20. United States v. Athlone Indus., Inc., 746 F.2d 977, 979 (3d Cir. 1984) – Whether robots can be sued and held liable.
21. Vehicle Intelligence & Safety LLC v. Mercedes-Benz USA, LLC, 635 F. App’x 917 (Fed. Cir. 2015), cert. denied, 136 S. Ct. 2390 (2016) – Patentability of expert systems in cars.
Introduction

The 20th century witnessed unprecedented growth in technology. Explosive growth in computing and communication technology is touching every facet of human life and society. The industrial era gave way to the internet era. Most countries, especially in the developing world, took their first steps by adopting the UNCITRAL Model Law on Electronic Commerce1 as their IT or e-commerce Acts.

The developments of the early 21st century point to a new world in which human intelligence will be supplemented and/or challenged by Artificial Intelligence (AI). AI is the capacity of a computer to perform tasks commonly associated with human beings2. The changes, and the pace at which they are happening, are mind-boggling. With them come challenges and opportunities for regulating interactions in entirely new forms: human-to-machine (H2M) and machine-to-machine (M2M) interactions that impact human beings. The rapid dissemination of this new order across a flat world necessitates the development of jurisprudence flexible and dynamic enough to accommodate current and future changes. Indeed, it is the need of the hour for both the developed and the developing world.

This paper is an attempt to study the developments in Artificial Intelligence, its implications for society in the form of opportunities and challenges, the liability issues surrounding AI, and the adequacy of existing liability frameworks to meet these issues and challenges. Some space is devoted at the end of the paper to exploring possible regulatory options and other recommendations.

Methodology

A descriptive and analytical approach is used in the development of this paper. Detailed subject research was conducted with secondary materials, including published reports, news articles, books and online sources. The author’s own experience working in machine learning and process automation was leveraged in the critical analysis of the literature.

1 UNCITRAL Model Law on Electronic Commerce, 1996.
2 de Souza, S. P. (2017, November 16).

Artificial Intelligence (AI)
The first part of this paper starts with a definition of AI and what distinguishes it from other technologies. A brief account of its history and of the factors driving its growth follows. The chapter ends with an overview of the spectrum of popular AI technologies.

What is AI

AI is the theory and development of computer systems able to perform tasks that normally require human intelligence. Examples include visual perception, speech recognition, decision making under uncertainty, learning, and translation between languages. What humans can do and what computers can do are inter-related and evolve together. As a result, the meaning of “AI” evolves over time, a phenomenon known as the “AI effect”3

AI means the capacity of a computer to perform tasks commonly associated with human beings.
This includes the ‘ability to reason, discover meaning, generalize, or learn from past experience’
and thereby find patterns and relations to respond dynamically to changing situations.4

AI, also called cognitive computing, is about machines thinking like humans and performing
human tasks. “Cognitive computing enables robots to learn,” says Garry Mathiason, co-chair of
the robotics, AI and automation industry group at Littler Mendelson in San Francisco. “In
traditional software, the possibilities are mapped out and predetermined. This has limited the
development and application of software-driven machines and robotics.

“However, this is dramatically changing with the introduction of cognitive computing. Modeled
after human learning, smart machines process massive data, identifying patterns. These patterns

3 Schatsky, David; Muraskin, Craig; Gurumurthy, Raghu. (2014, November 4).
4 de Souza, S. P. (2017, November 16).

are used to ‘create’ entirely new patterns, allowing machines to test hypotheses and find solutions
unknown to the original programmers.”5

Traditional software automates human tasks with pre-determined rules and routines. Such a computer fails to perform in a new situation that it encounters. AI systems possess the ability to understand context by gathering and analyzing data, and to respond to changes in the external environment by making intelligent choices. AI is the advent of computers as smart as, or smarter than, humans.

Three key aspects are associated with AI systems6

1. The capacity to find and gather information
2. The ability to analyze and make sense of the information gathered
3. The ability to make decisions and initiate actions
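
Taken together, these three aspects form a sense-analyze-act loop. The following is a minimal sketch of that loop in Python; the class, methods and threshold values are hypothetical illustrations, not any particular system’s design.

```python
# A minimal sketch of the sense-analyze-act loop described above.
# All names and thresholds here are invented for illustration.

class SimpleAgent:
    def sense(self, environment):
        """1. Find and gather information from the environment."""
        return {"temperature": environment.get("temperature", 20)}

    def analyze(self, observations):
        """2. Make sense of the information gathered."""
        return "too_hot" if observations["temperature"] > 30 else "ok"

    def act(self, assessment):
        """3. Make a decision and initiate an action."""
        return "turn_on_cooling" if assessment == "too_hot" else "do_nothing"

agent = SimpleAgent()
for env in [{"temperature": 25}, {"temperature": 35}]:
    print(agent.act(agent.analyze(agent.sense(env))))
# do_nothing
# turn_on_cooling
```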

Types of AI
According to Garry Mathiason7,

There are two types of artificial intelligence—hard and soft. Hard AI is focused on having
machines think like humans, while soft AI is focused on machines being able to do work that
traditionally could only be completed by humans. The main difference is that soft AI doesn’t
necessarily involve machines thinking like humans.

Soft AI is the pursuit of developing machines capable of performing the tasks normally associated with human effort and intelligence. Hard AI, by contrast, is about creating machines that are as good as or better than human beings.

5 Sobowale, J. (2016, April).
6 de Souza, S. P. (2017, November 16).
7 Sobowale, J. (2016, April).

AI Characteristics
There are five attributes that one would expect an intelligent entity to have8

1. Communication
Communication is the ability to exchange and assimilate information between entities. An intelligent entity should be able to communicate with other entities and with human beings. The more intelligent the entity, the easier the communication process.

2. Mental knowledge
Mental knowledge is the entity’s understanding of itself. This self-awareness helps an intelligent entity relate to other entities and its environment, and regulate its own behavior.

3. External knowledge
An intelligent entity should be able to know its environment, surroundings and context by gathering information. It has the ability to know and learn about the external world and to use this information in communication and interactions.

4. Goal-driven behavior
One factor that differentiates humans from animals is the ability to direct actions towards a specific goal. An intelligent entity should likewise be able to direct its own actions to achieve desired results.

5. Creativity
Creativity is the ability to take an alternate course of action based on experience and on cues from mental and external knowledge. The entity is in a position to evaluate the pros and cons of the alternatives for achieving its goal before it sets itself into action.

8 Friedman, D. (n.d.).

AI - History9

The term “AI” itself dates from the 1950s, when the attempt to simulate human intelligence began. Researchers developed a range of demonstration programs through the 1960s. During the 1970s, computers were able to accomplish a number of tasks once thought to be solely the domain of human beings. Scientists used computers for proving theorems, solving calculus problems, and responding to commands by planning and performing physical actions.

In the early 1980s, Japan launched a program to develop an advanced computer architecture that could advance the field of AI. Western anxiety about losing ground to Japan, as it had in the automobile market, contributed to decisions to invest in AI. The 1980s saw the launch of commercial vendors of AI technology products, some of which had initial public offerings, such as Intellicorp, Symbolics, and Teknowledge. By the end of the 1980s, half of the Fortune 500 were developing or maintaining “expert systems”, an AI technology that models human expertise with a knowledge base of facts and rules. However, scientists hit many roadblocks in leveraging the true potential of expert systems. The systems lacked common sense and missed experts’ tacit knowledge. The cost and complexity of building and maintaining large systems caused AI to run out of steam.

In the 1990s, technical work on AI continued with a lower profile. Techniques such as neural networks and genetic algorithms received fresh attention. The design of neural networks is inspired by the structure of the brain. Genetic algorithms aim to “evolve” solutions to problems by iteratively generating candidate solutions, eliminating the weakest, and introducing new solution variants through random mutations.
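
As a rough illustration of that generate-evaluate-mutate cycle, here is a minimal genetic-algorithm sketch in Python; the bit-string task and all parameter values are invented for illustration, and real genetic algorithms use richer encodings, crossover operators and fitness functions.

```python
import random

# Minimal genetic-algorithm sketch: evolve a bit string toward all ones.
TARGET_LENGTH = 20
POPULATION_SIZE = 30
MUTATION_RATE = 0.05

def fitness(candidate):
    # Count of correct bits; higher is better.
    return sum(candidate)

def mutate(candidate):
    # Introduce random variation, as described above.
    return [1 - bit if random.random() < MUTATION_RATE else bit
            for bit in candidate]

population = [[random.randint(0, 1) for _ in range(TARGET_LENGTH)]
              for _ in range(POPULATION_SIZE)]

for generation in range(200):
    # Keep the fittest half, eliminate the weakest.
    population.sort(key=fitness, reverse=True)
    survivors = population[:POPULATION_SIZE // 2]
    # Refill the population with mutated copies of the survivors.
    population = survivors + [mutate(s) for s in survivors]
    if fitness(population[0]) == TARGET_LENGTH:
        print(f"Solved in generation {generation}")
        break
```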

Major strides in AI development came in the new millennium. The factors that propelled AI to the forefront are discussed in the next part.

9 Schatsky, David; Muraskin, Craig; Gurumurthy, Raghu. (2014, November 4).

ABC3 - Engines that power AI

Once considered a remote possibility for a futuristic tomorrow, AI has seen its development and integration in both the private and public sectors accelerate through advances in technology over the past 20 years. A number of factors helped renew progress in AI during the early 2000s. The factors most responsible for this recent progress can be summarized as ABC3: Algorithms, Big data, Computing power, Cloud and Capital.

Algorithms

An algorithm is a routine process for solving a problem or performing a task. In recent years, new algorithms have been developed that dramatically improve the performance of machine learning, an important technology in its own right and an enabler of other technologies such as computer vision10. Many machine learning algorithms are now available on an open-source basis. This provides further impetus for improvement, as developers contribute enhancements to each other’s work.

Big data

An enormous amount of data is being generated by the wide presence of the internet, social media, mobile devices, and low-cost sensors. According to IBM, 2.5 quintillion (2,500,000,000,000,000,000) bytes of data are created every day, and 90 percent of all data was created within the last two years11.

Growing understanding of the potential value of this data has led to the development of new
techniques for managing and analyzing very large data sets. Big data has been a boon to the
development of AI. The reason is that some AI techniques use statistical models for reasoning
probabilistically about data such as images, text, or speech. These models can be improved, or
“trained,” by exposing them to large sets of data, which are now more readily available than ever.

10 Schatsky, David; Muraskin, Craig; Gurumurthy, Raghu. (2014, November 4).
11 Friedman, D. (n.d.).

Computing power

Today, the computing power necessary to implement advanced system designs is readily available. Sophisticated computations that were not possible a few decades ago on the best available computers are now possible on desktop computers. The current generation of microprocessors delivers 4 million times the performance of the first single-chip microprocessor introduced in 1971. Moore’s law, named after Intel cofounder Gordon Moore, states that the processing power of computers doubles every two years12.
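
A quick back-of-the-envelope check shows that these two figures are broadly consistent; the roughly four-decade span assumed below reflects the date of the source being cited.

```python
# Back-of-the-envelope check: doubling every two years from 1971
# is consistent with the "4 million times" figure quoted above.
years = 2014 - 1971          # assumed span, matching the source's date
doublings = years / 2        # Moore's law: one doubling every two years
growth = 2 ** doublings
print(f"{growth:,.0f}x")     # ~2,965,821x, i.e. on the order of millions
```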

Cloud, internet and collaboration

Cloud computing and the internet can be credited with advances in AI for two reasons. First, they make vast amounts of data and information available to any internet-connected computing device. This has helped propel work on AI approaches that require large data sets. Second, they have provided a way for humans to collaborate in training AI systems. Google’s language translation project, for instance, analyzes feedback and contributions freely offered by its users to improve the quality of automated translation. The cloud and the internet have opened up co-creation opportunities for people in different corners of the world.

Capital

Availability of capital is a pre-requisite for technological development. Investments are pouring into AI startups from venture capital funds and technology giants.

From 2011 through May 2014, over $2 billion in venture capital flowed to companies building products and services based on cognitive technologies.13 During this same period, over 100 companies merged or were acquired, some by technology giants such as Amazon,
Apple, IBM, Facebook, and Google. IBM has committed $1 billion to commercializing Watson,
its cognitive computing platform. Google has made major investments in AI in recent years,
including acquiring eight robotics companies and a machine-learning company. Facebook hired
AI luminary Yann LeCun to create an AI laboratory with the goal of bringing major advances in

12 de Souza, S. P. (2017, November 16).
13 Schatsky, David; Muraskin, Craig; Gurumurthy, Raghu. (2014, November 4).

the field. In 2015 alone, over $2.4 billion in venture capital was invested into the development of
AI-based technologies14.

A quick look at these factors indicates a synergistic effect that is propelling AI growth into orbits not attainable earlier.

The AI Spectrum

AI encompasses a wide spectrum of technologies. The spectrum ranges from simple automation
(systems that do – soft AI) to autonomous decision-making (systems that learn – hard AI). 15

[Figure omitted. Source: Bart Van der Mark, A Primer On Robotic Process Automation, digitally.cognizant.com, 27 January 2016]

14 Quinn Emanuel. (2016, December).
15 Norton Rose Fulbright. (n.d.).

Expert systems
Expert systems have the ability to perform tasks that require the kind of expertise humans have. An expert system works on inductive reasoning, based mainly on “if–then” rules or logic programming16.

Machine learning
Machine learning refers to the ability of computer systems to improve their performance by exposure to data, without the need to follow explicitly programmed instructions. At its core, machine learning is the process of automatically discovering patterns in data. Once discovered, a pattern can be used to make predictions. For instance, presented with a database of information about credit card transactions, such as date, time, merchant, merchant location, price, and whether the transaction was legitimate or fraudulent, a machine learning system learns patterns that are predictive of fraud. The more transaction data it processes, the better its predictions are expected to become17.

Machine learning can adapt its programming based on the training process and feedback, and the data can be represented by various graph and network structures. For example, an artificial neural network (ANN), or neural net, is a system designed to process information in a way that is inspired by the framework of biological brains.

Machine learning can also be used for sales forecasting, inventory management, oil and gas exploration, and public health.

Natural Language Processing
Natural language processing (NLP) refers to the ability of computers to work with text the way humans do, for instance extracting meaning from text or even generating text that is readable and grammatically correct. It can derive meaning, context, or sentiment in textual data. It can manipulate text in sophisticated ways, such as automatically identifying all of the people and places mentioned in a document; identifying the main topic of a document; or extracting and tabulating the terms and conditions in a stack of human-readable contracts. NLP can be used for analyzing customer feedback (sentiment analysis) about a particular product or service, and for automating discovery in civil litigation or government investigations (e-discovery). Chatbots are widely used to answer customer queries.

Speech Recognition
Speech recognition involves the conversion of speech to text and vice versa. The technology must handle the difficulties of coping with diverse accents and background noise, distinguishing between homophones (“buy” and “by” sound the same), and working at the speed of natural speech. Speech recognition systems use some of the same techniques as natural language processing systems, along with acoustic models that describe sounds and their probability of occurring in a given sequence in a given language18. Applications include medical dictation, hands-free writing, voice control of computer systems, and telephone customer service applications. Domino’s Pizza, for instance, recently introduced a mobile app that allows customers to use natural speech to order.

Computer Vision
Computer vision refers to the ability of computers to identify objects, scenes, and activities in images. Computer vision technology uses sequences of image-processing operations and other techniques to decompose the task of analyzing images into manageable pieces.19

Computer vision has diverse applications, including analyzing medical imaging to improve prediction, diagnosis, and treatment of diseases; face recognition, used by Facebook to automatically identify people in photographs and in security and surveillance to spot suspects; and shopping, where consumers can now use smartphones to photograph products and be presented with options for purchasing them.

Machine vision, a related discipline, refers to vision applications in industrial automation, where computers recognize objects such as manufactured parts in a highly constrained factory environment. This is rather simpler than the goal of computer vision, which seeks to operate in unconstrained environments.

Robotics
Robotics integrates multiple cognitive technologies, such as computer vision and automated planning, with tiny high-performance sensors, actuators, and cleverly designed hardware to automate and mechanically control precise machine movements.20 Robots can work alongside people and flexibly perform many different tasks in unpredictable environments. Examples include unmanned aerial vehicles, “cobots” that share jobs with humans on the factory floor, robotic vacuum cleaners, and a number of consumer products, from toys to home helpers.

Deep learning
Deep learning uses multiple layers of abstract representations of data to optimize the machine learning process, inspired by information processing and communication patterns in biological nervous systems. Deep learning (also known as deep structured learning or hierarchical learning) is part of a broader family of machine learning methods based on learning data representations, as opposed to task-specific algorithms.

16 de Souza, S. P. (2017, November 16).
17 Schatsky, David; Muraskin, Craig; Gurumurthy, Raghu. (2014, November 4).
18 de Souza, S. P. (2017, November 16).
19 Schatsky, David; Muraskin, Craig; Gurumurthy, Raghu. (2014, November 4).
20 Schatsky, David; Muraskin, Craig; Gurumurthy, Raghu. (2014, November 4).
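
The credit card fraud example in the machine learning entry above can be made concrete with a short sketch using the scikit-learn library. This is a hedged illustration only: the three features and the handful of transactions are invented, and a real fraud model would be trained on millions of records.

```python
# Minimal sketch of the credit-card fraud example described above.
# Features and data are invented for illustration.
from sklearn.linear_model import LogisticRegression

# Each row: [hour_of_day, amount_in_usd, merchant_distance_km]
transactions = [
    [10,   25.0,    2.0],
    [14,   60.0,    5.0],
    [11,   15.0,    1.0],
    [3,  2500.0, 4000.0],   # odd hour, large amount, far-away merchant
    [2,  1800.0, 3500.0],
    [4,  3000.0, 4200.0],
]
labels = [0, 0, 0, 1, 1, 1]  # 0 = legitimate, 1 = fraudulent

model = LogisticRegression(max_iter=1000)
model.fit(transactions, labels)

# The learned pattern is then used to score a new transaction.
new_transaction = [[3, 2200.0, 3800.0]]
print(model.predict(new_transaction))        # likely [1] (flag as fraud)
print(model.predict_proba(new_transaction))  # class probabilities
```

As the entry notes, the more transaction data such a system processes, the better its predictions are expected to become.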

AI Benefits and applications

Traditional software can make decisions based on pre-defined rules in the system. What differentiates AI is its ability to make decisions autonomously. AI can also help humans make better decisions.

AI has huge potential to bring accuracy, efficiencies, cost savings and speed to a whole range of
formerly human activities and to provide entirely new insights to transform businesses and the
services and products they offer21.

This section explores the benefits of AI and its applications to real-life problems. The first part focuses on applications in general, while the second part focuses on the implications and applications of AI in the legal profession.

AI - Applications and benefits

Well-known companies such as Google, Facebook, Apple and Uber, as well as start-ups, are active in the research and development of innovative AI technology-based products22. Examples include self-driving cars, robotic surgical equipment, complex automated accounting and security systems, and even software performing legal tasks such as document review or research. AI has innumerable practical applications, including medical diagnosis expert systems that emulate the decision-making of physicians, automated securities trading systems, automated drones, and many other variants. Developing alongside AI is natural language processing, which in the broadest sense concerns the interactions between computer programs and human beings, such that computers are learning to emulate human communication.

21 Kingsman, M. (2017, January 30).
22 Quinn Emanuel. (2016, December).

The benefits of AI can be broadly classified under six headings23.

1. Faster actions and decisions

AI can process huge amounts of data within a short span of time and make decisions based on underlying patterns.

In banking, automated fraud detection systems use machine learning to identify behavior patterns
that could indicate fraudulent payment activity. The public sector is also adopting cognitive
technologies for a variety of purposes including surveillance, compliance and fraud detection, and
automation.

Mature cognitive technologies include optimization, which automates complex decisions and
trade-offs about limited resources; planning and scheduling, which entails devising a sequence of
actions to meet goals and observe constraints.

2. Better outcomes

AI can concurrently analyze vast amounts of data, beyond the attention and focus of the human mind. This makes AI the better choice for tasks such as medical diagnosis, oil exploration and demand forecasting.

Computer vision systems automate the analysis of mammograms and other medical images. IBM’s
Watson uses natural language processing to read and understand a vast medical literature,
hypothesis generation techniques to automate diagnosis, and machine learning to improve its
accuracy. Machine learning algorithms can predict chances of infection and survival chances for
patients by analyzing clinical data available with healthcare providers.

In life sciences, machine learning systems are being used to predict cause-and-effect relationships
from biological data and the activities of compounds, helping pharmaceutical companies identify
promising drugs.

23 Schatsky, David; Muraskin, Craig; Gurumurthy, Raghu. (2014, November 4).

Oil and gas producers use machine learning in a wide range of applications, from locating mineral
deposits to diagnosing mechanical problems with drilling equipment.

Retailers use machine learning to automatically discover attractive cross-sell offers and effective
promotions.

3. Greater efficiency

AI can help businesses and service providers extract better value and output from the resources they deploy.

In health care, automatic speech recognition for transcribing notes dictated by physicians is used
in around half of US hospitals, and its use is growing rapidly.

4. Lower cost

Increased efficiency lowers the cost of operations. Businesses around the globe are in constant pursuit of lower operating costs, and they look to AI to automate and augment costly human resources.

Speech recognition technology is widely used to automate customer service telephone interactions,
and voice recognition technology to verify the identity of callers.

5. Greater scale

AI can be a key driver for organizations to achieve economies of scale and better market reach. With the aid of AI, organizations can now perform large-scale tasks that were hitherto impractical to do manually.

An example is Linklaters’ Verifi program24, which can sift through 14 UK and European regulatory registers to check client names for banks, processing thousands of names overnight. A person employed to do the same job would take an average of 12 minutes to search each customer name.

24 Friedman, D. (n.d.).

6. Product and Service Innovation

Product and service companies are relying on AI to add new features and to create entirely new products.

Technology companies are using cognitive technologies such as computer vision and machine
learning to enhance products or create entirely new product categories, such as the Roomba robotic
vacuum cleaner or the Nest intelligent thermostat.

In media and entertainment, several companies are using data analytics and natural language
generation technology to automatically draft articles and other narrative material about data-
focused topics such as corporate earnings or sports game summaries.

The next part of this chapter covers developments in the application of cognitive technologies to the legal field.

AI applications in the legal field

Artificial intelligence can change the way lawyers think, the way they do business, the way they communicate and interact with their clients, the way they provide services, and how legal research is performed.

Pressure from tech-savvy corporate clients, questioning the size of their legal bills and wanting to reduce risk, is forcing leading law firms to adopt technology. Law firms are also facing competition from accounting firms, which have begun to offer legal services and to use technology to do routine work25. A growing interest in “big data” and natural language processing has resulted in start-ups seeking to tackle the difficult task of aggregating, synthesizing and modeling a collective corpus of case law. This has led to the emergence of “Lawtech” start-ups, often set up by ex-lawyers and so called because they use technology to streamline or automate routine aspects of legal work. Lawtech has been compared to fintech, where small, nimble tech companies are trying to disrupt the business models of established banks.

25 Croft, J. (2016, October 6).

Legal Research

AI can classify and organize data faster, better and cheaper. It augments human intelligence and empowers people to make use of huge amounts of data to make better decisions.26 No lawyer on the planet could possibly read all of the cases, laws and regulations that come out. However, ROSS [a legal research tool built using Watson software] would have access to every case, every piece of information, and every regulation and piece of legislation in the world fed into it.

RavelLaw uses natural language processing to identify, extract and classify information from legal documents, automating basic case law analysis to make research more efficient and targeted27. The company hopes to add automated analysis of briefs, wording recommendations for particular judges, and probability-based outcome predictions for litigators and their clients.

What differentiates a senior lawyer from a junior lawyer is experience. AI offers a window of opportunity to close this knowledge gap by making information readily available. AI can also be deployed for discovery exercises in litigation, which can otherwise involve laborious hours of document word searches.

Reviews and Risk Assessment

Another area that has seen significant penetration within law firms and with clients is the use of AI to review documents. With the advent of e-discovery, it is no longer efficient or economical to have attorneys conduct first reviews of the massive volumes of documents collected in large litigations28. Professor Richard Susskind, a technology consultant and co-author of The Future of the Professions: How Technology Will Transform the Work of Human Experts, predicts incremental transformations in areas like the way documents are reviewed and the way legal risk is assessed.

26 Queen Law debate. (n.d.).
27 Norton Rose Fulbright. (n.d.).
28 Queen Law debate. (n.d.).

Contract review and risk assessment require a considerable amount of time spent reviewing documents. Such jobs are monotonous and depend heavily on the experience and diligence of the reviewer. Corporates might minimize risk by focusing on higher-value contracts and ignoring contracts under a certain value. ThoughtRiver’s contract review software uses AI to scan and interpret information from all written contracts used in commercial risk assessments, and presents it in a central online dashboard that enables users to assess risk more easily.

Legal compliance

It is important for organizations to maintain compliance assurance programs to avoid costly litigation. AI software can identify patterns in data and flag possible issues. It is possible to feed in key issues and fact patterns common to a certain type of legal matter and build models to identify documents that should be looked at first29. Such systems can detect problems closer to where they are happening and help in collecting the necessary documents for any possible litigation.

For example, in a trade secret theft matter, such systems can identify behaviors that quickly pinpoint the time frame in which the theft occurred, how it was accomplished and who was involved.

Legal Drafting 30

Many believe AI will allow lawyers to focus on complex, higher-value work. An example is
Pinsent Masons, whose TermFrame system emulates the decision-making process of a human. It
was developed by Orlando Conetta, the firm’s head of R&D, who has degrees in law and computer
science and did an LLM in legal reasoning and AI. TermFrame guides lawyers through different
types of work while connecting them to relevant templates, documents and precedents at the right
moments.

29 Friedman, D. (n.d.).
30 Croft, J. (2016, October 6).

MarginMatrix codifies the law in various jurisdictions and automates the drafting of certain documents. The time to draft a document falls from three hours for a lawyer to three minutes.

Investigations

AI can guide compliance departments in streamlining internal investigations so as to get to key information within hours. IT professionals have been pressed into investigations of data breaches in many instances, and it is important to prevent data breaches wherever possible. In the Sony data breach, unstructured data was exposed that was financially damaging and embarrassing to the company.

NexLP’s Story Engine is a program that can read through unstructured data and summarize
conversations, including the ideas discussed, the frequency of the communication and the mood
of the speakers. The company uses the data to build models to analyze behavior and find signs of
fraud or litigation31.

When investigating securities fraud, price movement can be a very useful indicator. An AI analytic engine can overlay communications between traders discussing a stock on top of price-movement data to compare the times at which both occurred. By comparing these various data points, a clear pattern can quickly emerge, one that might previously have gone unseen or been considered circumstantial. These patterns allow financial firms to better understand and identify such behavior in order to prevent it.

Predicting legal outcomes32

Another potential use for data is predicting legal outcomes – getting a high/low estimate or an X per cent chance that the outcome will be this or that. In 2014, Chicago-Kent College of Law professor Daniel Martin Katz, then at Michigan State University law school, and his colleagues created an algorithm
31 Sobowale, J. (2016, April).
32 Sobowale, J. (2016, April).

to predict the outcomes of U.S. Supreme Court cases. It attained 70 percent accuracy for 7,700
rulings from 1953 to 2013.
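
A hedged sketch of how such outcome prediction works in principle is shown below, using a random-forest classifier from scikit-learn. The features and toy data are invented for illustration; the actual study used a much richer feature set derived from the Supreme Court Database.

```python
# Illustrative sketch of predicting case outcomes from case features.
# All features and data below are invented for illustration.
from sklearn.ensemble import RandomForestClassifier

# Each row: [issue_area_code, lower_court_disposition, petitioner_type]
past_cases = [
    [1, 0, 2],
    [3, 1, 1],
    [1, 1, 2],
    [2, 0, 3],
    [3, 0, 1],
    [2, 1, 3],
]
outcomes = [1, 0, 1, 0, 1, 0]  # 1 = reverse, 0 = affirm

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(past_cases, outcomes)

pending_case = [[1, 0, 3]]
print(model.predict(pending_case))        # predicted outcome
print(model.predict_proba(pending_case))  # the "X per cent chance" estimate
```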

Another example is www.wevorce.com. After getting clients to fill in a form and provide information, it uses algorithms to try to predict how the divorce will progress, and then provides services to its clients based on that prediction.

Limitations for AI application in the legal field33

Rules and regulations for lawyers

Although AI offers immense potential to the legal profession, this opportunity is constrained by the regulations that surround lawyers in different geographies. Rules designed decades ago can be a constraint on applying technology to make law more efficient.

Lawyering skills

The possibility of an AI lawyer replacing a human lawyer in the courtroom remains distant. Apart from legal knowledge, a good lawyer must possess skills including analysis, empathy and good judgement. This emotional content of the role is not easy to replace with machines. Computers might face challenges in dealing with litigation or in playing to the judge.

Availability of content

Legal research is a daunting task. It is hard to find relevant materials, including case law, across a wide array of subjects. Programmers have not been able to acquire all the case law and all the data they need to make legal research tools more robust. Even where such content is available, its holder may prefer to use it as a competitive advantage in his or her own practice rather than feed it into a computer system for the wider benefit of all.

33 Queen Law debate. (n.d.).

Future of AI34

The impact of cognitive technologies on business should grow significantly over the next decade and beyond, for two reasons. First, the performance of these technologies has improved substantially in recent years, and continuing R&D efforts can be expected to extend this progress. Second, billions of dollars have been invested in commercializing these technologies. At present, application is limited to a narrow set of prioritized use cases; as the performance of these technologies improves and commercialization expands, application horizons will broaden.

Examples of the strides made by cognitive technologies are easy to find. Voice recognition systems once required painstaking training and worked well only with controlled vocabularies; they found application in specialized areas such as medical dictation but did not gain wide adoption. The accuracy of Google’s voice recognition technology, for instance, improved from 84
34 Schatsky, David; Muraskin, Craig; Gurumurthy, Raghu. (2014, November 4).

percent in 2012 to 98 percent less than two years later, according to one assessment. Today, tens
of millions of Web searches are performed by voice every month.

Computer vision has progressed rapidly. A standard benchmark used by computer vision
researchers has shown a fourfold improvement in image classification accuracy from 2010 to
2014. Facebook reported in a peer-reviewed paper that its DeepFace technology can now
recognize faces with 97 percent accuracy. Computer vision systems used to be confined to
industrial automation applications but now, as we’ve seen, are used in surveillance, security, and
numerous consumer applications.

IBM was able to double the precision of Watson’s answers in the few years leading up to its
famous Jeopardy! victory in 2011. The company now reports its technology is 2,400 percent
“smarter” today than on the day of that triumph. IBM is now seeking to apply Watson to a broad
range of domains outside of game-playing, from medical diagnostics to research to financial advice
to call center automation.

The AI market is expected to be worth more than US$46 billion by 2020.35

35 Kingsman, M. (2017, January 30).

AI Challenges and Issues

‘Is technology a boon or a bane?’ is a question that has remained in the human mind through the ages of human evolution and revolutions. AI is no exception. Along with the opportunities and benefits it brings to the table, there is an equal number of issues and challenges associated with AI. A quick exploration of these issues and challenges starts with a review of recent news headlines.

AI in the news May 2018

‘Alexa records, shares private conversation’ – The Times of India, Bengaluru May 26, 2018,
Page 16
A couple’s private conversation was mysteriously recorded by their Amazon Echo device and
sent to one of their contacts, igniting privacy concerns about the voice activated gadgets that the
online retailer wants to make as commonplace in homes as TVs.

‘Google, FB face first GDPR complaints’ – The Times of India, Bengaluru May 26, 2018,
Page 16

As Europe’s new privacy law took effect on Friday, one activist wasted no time in asserting the additional rights it gives people over the data that companies want to collect about them. Austrian Max Schrems filed complaints against Google, Facebook and WhatsApp, arguing they were acting illegally by forcing users to accept intrusive terms of service or lose access.

‘EU states agree to make search engine pay for news’ – The Times of India, Bengaluru May
26, 2018, Page 16
Search engines like Google and Microsoft’s Bing could be made to pay for showing snippets of news articles under draft copyright rules endorsed by EU ambassadors on Friday. The measure, which is not yet final, would allow press publishers to ask search engines to pay them for showing their articles for up to one year after publication.

‘Tesla hits parked police car, driver blames ‘autopilot’’ – The Times of India, Bengaluru May 31, 2018, Page 14

The driver of a Tesla Inc Model S crashed into an unoccupied, parked police vehicle in Laguna Beach, California, on Tuesday, and told investigators that the Tesla was in ‘Autopilot’ mode at the time, police said. Autopilot is a semi-autonomous technology that the company says is a form of advanced cruise control.

‘India gears up for AI-driven wars’ – The Hindu, Bengaluru May 21, 2018, Page 11

In an ambitious defense project, the government has started work on incorporating Artificial Intelligence to enhance the operational preparedness of the armed forces in a significant way, which would include equipping them with unmanned tanks, vessels, aerial vehicles and robotic weaponry. The move, part of a broader policy initiative to prepare the Army, the Navy and the Air Force for next-generation warfare, comes amid rising Chinese investments in AI. Military sources said the application of AI in border surveillance could significantly ease the pressure on armed forces personnel guarding the sensitive frontiers with China and Pakistan.

‘Google to scrap AI project with US Military’ – Sunday Times of India, Bengaluru June 3, 2018,
Page 13

Alphabet Inc’s Google will not renew a contract to help the US military analyze aerial drone imagery when it expires in March. The defense programme, called Project Maven, set off a revolt inside Google, as factions of employees opposed Google technology being used in warfare. The dissidents said it clashed with the company’s stated principle of doing no harm, and cited risks around using nascent AI technology in lethal situations. More than 4,600 employees signed a petition calling for Google to cancel the deal.

These six news headlines, appearing within a span of 15 days in the Indian news media, open a window onto the ethical, moral and legal challenges around AI. It is imperative that those developing or using AI look at the associated risks, including legal risks.

There are essentially two perspectives around AI36.

First is the techno-utopian perspective propounded by Iain Banks in his “Culture” novels, suggesting that AI is going to make the future much better for humans, bringing us untold wealth and prosperity. A recent McKinsey research report describes AI as a contributing factor in the transformation of society, stating that the transformation is happening ten times faster and at three hundred times the scale, or with roughly three thousand times the impact, of the industrial revolution.

At the other end is the doomsday prediction of Stephen Hawking and Elon Musk, who have described AI as the greatest existential threat to humanity.

This section covers the challenges and issues that emerge from AI adoption in day-to-day life.

AI and Human Rights – Job loss

The debate in the AI world is around automation, where the machine does everything, versus augmentation, where the machine helps the person accomplish the task37. The New York Times bestseller The Second Machine Age argued that digital technologies and AI are poised to bring enormous positive change, but also risk significant negative consequences. AI has the power to snatch jobs from people, which will have both societal and political implications in the future. Researchers at the University of Oxford published a study estimating that 47 percent of total US employment is “at risk” due to the automation of cognitive tasks38. A study by Deloitte

36 Queen Law debate. (n.d.).
37 Queen Law debate. (n.d.).
38 Schatsky, David; Muraskin, Craig; Gurumurthy, Raghu. (2014, November 4).

has suggested that technology is already leading to job losses in the UK legal sector, and some
114,000 jobs could be automated within 20 years39.

Kay Firth-Butterfield, in the Queens law debate, remarked: “The industrial revolution hurt a lot of people over a long period. It looks as if this industrial revolution will be much faster, and we need to prepare so that we do not hurt as many people as quickly.”

Society will have to cushion the effects of job loss on multiple fronts: first, assistance and support for those who lose their jobs; second, retooling, reskilling and redeploying resources in other areas; and third, the risk that a world where everything is done by machines will create an unhealthy population without any purpose in life.

AI and Labor laws

AI deployment can lead to industrial relations problems. Similar concerns existed when computers were first introduced in offices.

“When AI is coupled with big data, the solutions formed can unintentionally conflict with workplace laws, some of which were written 50 to 100 years ago,” says Garry Mathiason of Littler Mendelson. For example, big data shows that how close one lives to where one works correlates with employee tenure in a job. If such software is used for screening applicants, those hired will tend to live in the neighbourhoods around the company’s location. If the diversity of those neighbourhoods is not adequate, a similar pattern will be reflected in the employee pool as well40 (see the sketch below). Dealing with the liability of machines causing injury to a human colleague is another challenge. The liability issues with AI are discussed in the next chapter.
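
One way such a screening proxy can be audited is the “four-fifths rule” used in US employment-discrimination practice, under which the selection rate for any group should be at least 80 percent of the highest group’s rate. The sketch below is a hedged illustration only; the group names and numbers are invented.

```python
# Illustrative audit of a screening tool using the four-fifths rule.
# All numbers below are invented for illustration.

applicants = {"group_a": 200, "group_b": 150}
hired      = {"group_a": 60,  "group_b": 18}  # e.g. after commute-distance screening

rates = {g: hired[g] / applicants[g] for g in applicants}
highest = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / highest
    flag = "OK" if impact_ratio >= 0.8 else "possible adverse impact"
    print(f"{group}: selection rate {rate:.0%}, ratio {impact_ratio:.2f} ({flag})")
# group_a: selection rate 30%, ratio 1.00 (OK)
# group_b: selection rate 12%, ratio 0.40 (possible adverse impact)
```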

AI and privacy issues


Privacy laws date back to the 1970s, when automated data processing became popular. "This person lives at this address; they're getting this type of heart medication; they also are on this type
39 Sobowale, J. (2016, April).
40 Sobowale, J. (2016, April).

of insurance." Artificial intelligence can do similar things at a totally different scale, impacting the lives of many41. The controversy surrounding Cambridge Analytica is yet to be settled. No one would have imagined that an innocuous Facebook survey could be leveraged to influence poll outcomes in leading democracies. When a machine is used to interview potential job candidates and measures the person’s blood pressure, privacy becomes a key issue.

The reaction of the couple whose privacy was invaded by Alexa says it all: “I felt invaded. I’m never plugging that device in again, because I can’t trust it.” This is where the protection of personal data and privacy becomes a determining factor for the successful deployment of AI-based solutions. Users must have the assurance that personal data are not a commodity, and must know they can effectively control how and for what purposes their data are being used. A data protection legal framework that takes future possibilities into account is the need of the hour.

AI and IPR issues


The vast potential for commercializing AI development is going to be fertile ground for IPR-related litigation. If one looks at Google or Facebook, market capitalization depends heavily on the underlying AI-based technology these companies use to offer products and services.

Taking the driverless car as an example, the final product is an assemblage of many and varied
integrated systems that are produced by multiple manufacturers. For a driverless car to work
effectively, it needs sensors to navigate road obstructions, such as radar and laser detection. It must
have a computer to direct its actions and that computer needs to have a logic framework within
which to operate – internally by use of its own operating software and also externally by reference
to map data. All these systems need to work together effectively, and this is without consideration
of all the usual mechanical components which form a standard car, which must also be present and
functioning. This landscape definitely poses issues in relation to IPR for the components, and
product liability issues due to sub-system failure42.

39 Sobowale, J. (2016, April).
40 Sobowale, J. (2016, April).
41 Kingsman, M. (2017, January 30).
42 Buyers, J. (2015, January).
The issues around Intellectual Property Rights (IPR) are discussed under two headings.

1. IPR for AI
2. IPR for creations of AI

IPR for AI43

AI technologies have also been at issue in patent cases, and such cases are certain to increase.
Whether the AI subject matter at issue is patent-eligible subject matter under 35 U.S.C. § 101 is
the first question to ask. Courts addressing this question must first ask whether a patent’s claims
are directed to a patent-ineligible concept, such as laws of nature or abstract ideas. If not directed
to such a concept, a patent will be enforceable under this test.

However, if a patent’s claims are directed to a patent-ineligible concept, the analysis moves to a
second step: whether the patent claims, despite being directed to a patent-ineligible concept, are
nevertheless patent-eligible because they include a sufficiently “inventive concept”.

In Vehicle Intelligence & Safety LLC v. Mercedes-Benz USA, LLC, 635 F. App'x 917 (Fed. Cir.
2015), cert. denied, 136 S. Ct. 2390 (2016), the court dismissed as patent-ineligible certain claims
directed to the use of “expert system(s)” to screen equipment operators for impairments such as
intoxication. The Vehicle Intelligence Court first determined that the claims at issue were directed to
a patent-ineligible concept— “the abstract idea of testing operators of any kind of moving
equipment for any kind of physical or mental impairment.” The “expert system” concept was
considered abstract because, based on the definition assigned to it by the Court during claim
construction, it was something performed by humans absent automation, and also because “neither
the claims at issue nor the specification provide any details as to how this ‘expert system’ works
or how it produces faster, more accurate and reliable results.” This lack of clarity contributed to a
holding of lack of inventive concept in the second step, rendering the patent claims at issue
unenforceable. The Federal Circuit described the patent as equivalent to “a police officer field-
testing a driver for sobriety.”

43 Quinn Emanuel. (2016, December).

In Blue Spike, LLC v. Google Inc., No. 14-CV-01650-YGR, 2015 WL 5260506, at *5 (N.D. Cal.
Sept. 8, 2015), aff'd, 2016 WL 5956746 (Fed. Cir. Oct. 14, 2016), the Court found that because
the patents at issue sought to model on a computer “the highly effective ability of humans to
identify and recognize a signal,” the patents simply cover a general-purpose computer
implementation of “an abstract idea long undertaken within the human mind.” The Blue Spike
Court also found that the second step of the eligibility inquiry for “inventive concept” was not
present as the claims “cover a wide range of comparisons that humans can, and indeed, have
undertaken since time immemorial.”

The District Court for the Northern District of California has considered the patentability of driverless cars
and automated support programs. In Hewlett Packard Co. v. ServiceNow, Inc., No. 14-CV-00570-
BLF, 2015 WL 1133244 (N.D. Cal. Mar. 10, 2015), Judge Freeman found that HP patents were
directed to the abstract idea of “automated resolution of IT incidents” and were not patent-eligible.
While rejecting evidence of commercial success as evidence of an “inventive concept,” Judge
Freeman considered the patents on self-driving cars in the context of patent eligibility. She
remarked that while a self-driving car may be very commercially successful, novel, and non-
obvious, the concept of a self-driving car is still abstract. So, while an inventor “may be able to
patent his specific implementation,” Judge Freeman disagreed that the concept of self-driving cars
could be patented in the abstract. While Judge Freeman’s hypothetical is likely dicta, it
nevertheless serves as a guidepost regarding patent eligibility of self-driving vehicles.

IPR for creations of AI44

For patent litigation involving AI technologies, another contentious area is the determination of
inventorship. An interesting question to be answered is whether AI could claim inventorship. It is
well-settled that an inventor can use “the services, ideas, and aid of others in the process of
perfecting his invention without losing his right to a patent.”

44 Quinn Emanuel. (2016, December).

Hess v. Advanced Cardiovascular Sys., 106 F.3d 976, 981 (Fed. Cir. 1997). Furthermore, 35
U.S.C. Section 103 states: “Patentability shall not be negated by the manner in which the invention
was made.” This means that AI augmentation in the inventive process does not defeat patentability.
However, the patent statutes define “inventor” to mean “the individual . . . who invented or
discovered the subject matter of the invention” and the statutes also describe joint inventors as the
“two or more persons” who conceived of the invention. See 35 U.S.C §§ 100, 116(a).

The Federal Circuit has explicitly barred legal entities from obtaining inventorship status because
“people conceive, not companies.” New Idea Farm. Equip. Corp. v. Sperry Corp., 916 F.2d 1561,
1566 n.4 (Fed. Cir. 1990).

The US Copyright Office has already announced that it “will not register works produced by a
machine or mere mechanical process that operates randomly or automatically without any creative
input or intervention from a human author.” U.S. Copyright Office, The Compendium of U.S.
Copyright Office Practices § 306 (3d ed. 2014); see also U.S. Copyright Office, The Compendium
of U.S. Copyright Office Practices § 202.02(b) (2d ed. 1984), available at
http://copyright.gov/history/comp/compendium-two.pdf (“The term ‘authorship’ implies that, for
a work to be copyrightable, it must owe its origin to a human being.”).

The 2014 iteration of the Human Authorship Requirement was partially the result of a prominent
public discourse about non-human authorship stemming from the “Monkey Selfies.” See Naruto
et al v. Slater, No. 3:2015-cv-04324, 2016 WL 362231, *1 (N.D. Cal. Jan. 23, 2016).

Source: Google. Naruto selfie captured in the camera of British nature photographer David Slater

AI and competition concerns

McKinsey’s comment on AI’s potential to transform society at a scale, pace and pervasiveness
beyond human imagination, referred to earlier, also points to the possibility of AI innovators
becoming monopolists or having undue influence in the marketplaces they are in. It is possible
for the authors of AI software systems to grab a big chunk of revenue, making a small number of
people ultra-rich and resulting in inequality in society. In a winner-takes-all scenario, resource
amassment by such players further aggravates the problem. The sudden rise of companies like
Google and Facebook, displacing centuries-old ‘great’ companies at the top of the list, is a pointer
to competition concerns. Such shifts in power can create important security concerns at different
levels, including for nation states.

AI and Malpractice Claims

Legal and Medical professions are among the professions that require the greatest decision-making
and exercise of judgment45. It is because of this that claims of malpractice are available to those
who rely on the decision-making and judgment of the skilled, trained professionals who practice
in these fields. It is also the case that these two fields are introducing an increasing number of AI-
based technologies.

In the medical industry, robotic surgical instruments and cancer treatment devices, as well as the
continued development and adoption of IBM’s Watson for medical treatment has led to increased
analysis of potential liability for the use of such instruments and devices. It is possible to envision
a medical malpractice action based on a lack of informed consent arising when a physician fails to
inform the patient of all relevant information about a course of treatment, including any risks
associated with the use of autonomous machines for such treatment.

While AI innovations are certain to save time and money, there are concerns that AI technology,
when used to replace human professional judgment, could lead to increased claims raising complex
issues of causation, legal duties and liability. A separate chapter is devoted to discussing the
aspects of liability.

AI and locus of control

AI adoption – say, riding in an autonomous vehicle – means handing over judgement and decisions
to machines. This means a loss of locus of control for human beings. When you are in an
autonomous car, you are not driving; the car is driving, and unless you can react within
milliseconds, stopping it might not be possible. It is important to have trust in machines when
control is passed to them. Else, it can lead to finger pointing, as in the Tesla car collision case.

45 Quinn Emanuel. (2016, December).
AI with IQ and EQ

So far, the discourse on AI has been around computers or machines acquiring an intelligence
quotient comparable to or better than that of human beings. It is well established that computers
can do logical tasks. The next question is whether machines can acquire feelings and emotional
intelligence like human beings. This will be important for machines to do things differently based
on their experiences. Machines with heart and brain can pose a different set of challenges.

Sophia, the social humanoid robot who has stunned the world with her artificial intelligence and
charm, is able to display over 62 facial expressions. Built by Hong Kong-based company Hanson
Robotics, Sophia hides impressive artificial intelligence behind her human-like façade, learning
from each conversation to become ever more human. Sophia became a Saudi Arabian citizen in
October 2017, the first robot to receive citizenship of any country. In November, she was named
the United Nations Development Programme’s first ever Innovation Champion, and the first non-
human to be given any United Nations title46.

Source: Khaleej Times website

46 Khaleej Times. (2018, April 3).
Will Smith, who starred in the film adaptation of Isaac Asimov’s collection of science fiction
stories I, Robot, tried his hand at a spot of robot dating the other day on the Cayman Islands. In a
video posted on YouTube, the actor attempted to have a date with Sophia the robot. After refusing
a glass of wine, she said jokes were an “irrational human behavior”.

"Sophia, can I be honest with you? I don't know if it's the island air or the humidity, but you're just
so easy to talk you. You got a clear head, literally," Smith is heard telling Sophia in the video.
However, Sophia interrupts and is quoted as saying, "I think we can be friends. Let's hang out and
get to know each other for a little while."

Although Sophia is an early form of machine with emotional intelligence, the episode clearly shows
the possibility of much more evolved forms of AI beings. The co-existence of human and AI
beings can bring hitherto unseen challenges to society.

AI overtaking Human beings

Computers can do the logical functions, but can they handle the emotional aspects involved with
being human? This will be a pre-requisite for machines to learn from experience and respond to
circumstances effectively. Computers can be trained on how to learn; they in turn teach other
computers how to get smarter and smarter. Machines won’t necessarily integrate the emotional
aspects of humans’ lives like love and empathy47. Stephen Hawking48 was worried that once
machines are let loose on the Internet, they could be vastly more intelligent than any group of
humans combined. Who knows whether, at some point in the future, self-learning machines will
create their own replicas by combining 3-D printing technology and all the knowledge that is
available on the net.

47 Kingsman, M. (2017, January 30).
48 Holley, P. (2016, January 16).
Once more data is available, AI can learn to replicate human emotion, thought and analysis with
the help of improved algorithms. AI can evolve to mimic human emotions and then learn from
that. Google Assistant has developed the ability to contextualize its conversations with human
beings using deep learning algorithms. The Smith and Sophia episode also shows the great strides
machines are making in developing their own heart and emotions. The time is not far off when
computers will surpass human capabilities, and Stephen Hawking’s prediction that the development
of full artificial intelligence could spell the end of the human race could materialize.

A diametrically opposite view is that super-intelligent machines won’t be interested in the affairs
of lesser human beings. Only time will tell whether AI is “the best thing that we've ever done
or our last”49.

AI and citizen rights

Human beings tend to seek rights for things that appear to be like us and to deny rights to things
that don’t50. Sophia the robot might have special status as a Saudi citizen due to novelty. The
question that arises is whether AI will at some point acquire, or rather demand, legal rights. Even
if not for granting rights, the recognition of legal status may be required to bind AI for its actions.
The list of threshold characteristics proposed for a computer to have legal or moral personhood is
exhaustive: the ability to experience pain or suffering, to have intentions or memories, and to
possess moral agency or self-awareness. None of these characteristics is well-defined, though, and
this is especially the case with the most oft-cited of the lot: consciousness. It is most likely that a
machine that has the ability to interact with humans in the world will be the first candidate for
rights. Interesting questions arise regarding personhood for AI:

If an entity is aware of life enough to assert its rights by protesting its dissolution, is it entitled to
the protection of the law?

49 Holley, P. (2016, January 16).
50 Friedman, D. (n.d.).
Should a self-conscious computer facing the prospect of imminent unplugging have standing to
bring a claim of battery?

One possibility would be to treat A.I. machines as valuable cultural artifacts, to accord them
landmark status with stipulations about their preservation and disassembly.

AI and security

As AI becomes pervasive, it may also become more vulnerable to hacking and cyber-attacks.
The cyber-security of AI systems is therefore critical and requires action at national and
international levels.

AI systems themselves can be deployed to enhance security. Companies around the globe are
investing heavily in building AI-based security solutions, be it log review, fraud detection or
e-surveillance. The fear of surveillance can push users to TOR and more secure systems, adding
complexity to enforcing security. The flip side is that AI itself can be a perpetrator of cybercrimes.
A self-mutating, encrypted malware created and propagated by AI systems is not a challenge any
cyber-security wizard would like to crack.
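As a flavour of the AI-for-security use case mentioned above, a minimal log-review sketch using scikit-learn’s IsolationForest is given below. The two features (requests per minute, bytes transferred) and all the numbers are synthetic, illustrative assumptions; a real deployment would engineer far richer features.

# Minimal anomaly-detection sketch for log review using scikit-learn's
# IsolationForest, on synthetic traffic features. All numbers are
# illustrative assumptions only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=[50, 1_000], scale=[5, 100], size=(200, 2))  # routine traffic
attack = np.array([[400, 90_000]])                                   # one burst, e.g. exfiltration
logs = np.vstack([normal, attack])

model = IsolationForest(contamination=0.01, random_state=0).fit(logs)
flags = model.predict(logs)      # -1 marks suspected anomalies
print(np.where(flags == -1)[0])  # the burst row is flagged for human review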

A fragmented security solution will put interoperability and the safety of end-users at risk. A key
challenge will therefore be to set up the necessary governance at global, regional, national and
industry levels involving all main stakeholders, including public authorities (e.g. ministries and
the responsible national security associations) and all value chain players (manufacturers, service
providers and operators).

AI and warfare and terrorism

Stephen Hawking’s worst fears can materialize in two ways: first, a super-intelligent AI attack on
the human race; second, a war waged by humans and powered by AI. Military investments in AI
systems indicate that the odds favor the latter. Access to nuclear weapons by terrorists and rogue
states was long considered the biggest threat to humanity, with the difficulty of acquiring the
technology acting as a great barrier to nuclear capability. The threat from AI warfare may soon
outstrip the nuclear threat. The only ray of hope is developers asserting themselves and restricting
the technology to peaceful uses, as in the case of Google.

AI and responsibility for actions

Who is responsible for AI actions? This is the primary question to be answered in order to
overcome many of the challenges and issues listed in the previous sections. AI is a great promise
and a great peril at the same time. The driver in the Tesla car was quick to point a finger at
‘Autopilot’. The next chapter is devoted completely to discussing the possibility of evolving a
liability framework that can address this interesting question.

AI and Liability
The previous sections looked at the benefits and issues associated with AI adoption. AI is certain
to save time and money in human endeavors in different domains. However, when AI technology
replaces human professional judgment, it could lead to increased claims raising complex issues of
causation, legal duties and liability.

Autonomous (or intelligent) machines present new challenges for our existing models of liability
which are largely causative based. It will be difficult to ascertain whether a machine has behaved
in a particular manner due to its innate complexity or learned behaviours. Attribution of "fault" or
"defect" for liability purposes is very difficult. The law will need to flex, to adapt and accommodate
the new evolution in technology51. This section examines liability issues associated with AI.

Liability

Liability is essentially a sliding scale based on the degree of consequential legal responsibility
society places on a person. As we will see later on, responsibility and hence liability levels have
historically not been static – the able-minded, children and mentally incapable adults have different
levels of liability, the latter having little or no responsibility for their actions and therefore a
commensurately low degree of accountability and liability. There are also differences across time
and place. For example, a minor’s contractual liability varies between India (Indian Contract Act,
1872) and the UK.

Until relatively recently, the question of whether or not a machine should be accountable (and
hence liable) for its actions was a relatively trite one – a machine was merely a tool of the person
using or operating it. There was absolutely no question of a machine assuming a level of personal
accountability or even “personhood” as they were incapable of autonomous or semi-autonomous
action.

51 Buyers, J. (2015, January).
AI Liability

The sliding scale in the case of AI systems ranges from AI as a passive agent or slave to artificial
personhood. Where an AI machine is placed on the sliding scale depends on the level of intelligence
acquired by it. At the basic level, there are machines that can respond and make limited pre-defined
decisions in response to external stimuli and in accordance with programmed software parameters.
This is the current “state of the art”. One will hit the other end of the scale once there are self-aware
machines that have the capacity to learn and to make autonomous decisions that are not directly
traceable to their programming52.

To analogize, you can currently get into a Google Car which will (quite effectively) drive you from
A to B and will avoid traffic collisions based on its programming, and inputs from its satellite
navigation systems and radar, but you still cannot have a sensible conversation or argument with
it.

Existing Liability Frameworks

Current liability frameworks that can be referred to in fixing liability for AI actions can be
classified into the following categories:
1. Liability based on Respondeat Superior
2. Vicarious liability or agency theory
3. Strict Liability
a. Tort - negligence
b. Contractual product liability
c. Strict liability provisions from Consumer law
d. Dangerous activities
4. Common enterprise liability
5. Criminal liability

The scope, reach and limitations of each of the above will be reviewed in the following sections.
The potential issues in relation to their application to AI systems are also analysed. Clearly, the
most conventional analysis is to treat intelligent or semi-intelligent machines as complex products.

52 Buyers, J. (2015, January).

Tort – Respondeat Superior Liability

The Respondeat Superior (Latin: “let the master answer”) rule is also called the “Master-Servant
Rule.”53 This rule was established by praetorian law in ancient Rome. The Praetor’s Edict
provided for cases in which a claim on obligations arising under the transactions of a slave who
was directly involved in commercial activities could be made against the slaveholder.

Both AI and the slave are not subjects of law, but rather its objects. Slaves could not apply to
courts, because only free persons could participate in litigation. Assuming that the parallel between
the legal status of AI and that of slaves is possible, it can be stated that damages caused by the
actions of AI should be compensated by its owner, the AI developer or the legal person on whose
behalf it acts. In Roman law, this meant that the person (head of household) responsible for persons
alieni iuris (subordinate slaves), i.e. their owner, was held liable for torts committed by the slaves.

Vicarious Liability / Agency Law


Vicarious liability is the responsibility which renders the defendant liable for torts committed by
another. The liability is imposed on a person not because of his own wrongful act, but due to his
relationship with the tortfeasor. On a robot-as-tool view, the liability for the actions of AI rests
with their owners or users.

If AI, acting on behalf of a principal P, negotiates and makes a contract with a counterparty C, the
rights and obligations established by the AI directly bind P. All acts of the AI are considered acts
of P. P cannot evade liability by claiming either that she did not intend to conclude such a contract
or that the AI made a decisive mistake.
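The robot-as-tool/agency pattern can be sketched in code. In this hedged illustration, all class and method names are hypothetical; the point is that every obligation the software concludes is recorded against the principal, never against the agent itself.

# Hypothetical sketch of the agency pattern: the software agent negotiates,
# but every resulting obligation is recorded against the principal P.
from dataclasses import dataclass, field

@dataclass
class Contract:
    principal: str    # the party bound by the agent's act
    counterparty: str
    price: float

@dataclass
class PurchasingAgent:
    principal: str    # "P", the licensee on whose behalf the agent acts
    budget: float
    contracts: list = field(default_factory=list)

    def negotiate(self, counterparty, asking_price):
        # Autonomously accept any offer within budget. The Contract names
        # the principal, not the agent: under agency theory, P cannot evade
        # liability by pointing at the software.
        if asking_price <= self.budget:
            deal = Contract(self.principal, counterparty, asking_price)
            self.contracts.append(deal)
            return deal
        return None

agent = PurchasingAgent(principal="P", budget=100.0)
print(agent.negotiate("C", 80.0))  # Contract(principal='P', counterparty='C', price=80.0)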

Agency law has developed on the basis of vicarious liability. It provides a suitable framework in
which to find a solution for harms committed by the next generation of intelligent software: an
agency relationship is formed when the software licensee installs and then executes the software
program. Accordingly, intelligent software agents could be regulated under agency law. A
software licensee will be activating software for some purpose. The intelligent software agent will
then use its learning, mobility and autonomous properties to accomplish specific tasks for the
licensee. Thus, we see the software agent in the legal role of the “agent,” and the software licensee
in the legal role of the “principal.” This relationship of agent/principal is formed whether or not
the parties themselves intended to create an agency or even think of themselves as agent and
principal54.

53 Paulius, Č., Grigienė, J., & Sirbikytė, G. (2015).

In the e-commerce domain, laws recognize machines as participants in ordinary consumer
transactions. Article 12 of the United Nations Convention on the Use of Electronic
Communications in International Contracts states that a person (whether a natural person or a legal
entity) on whose behalf a computer was programmed should ultimately be responsible for any
message generated by the machine. This interpretation complies with the general rule that the
principal of a tool is responsible for the results obtained by the use of that tool since the tool has
no independent volition of its own. Sections 10A and 11(c) of the Information Technology Act,
2000 provide legal validity to electronic contracts in India. The language of section 11(c), ‘by an
information system programmed by or on behalf of the originator to operate automatically’, makes
clear that an information system can be programmed on behalf of a human being.

Intelligent software agents will be capable of causing harm. Unlike earlier software agents, they
will be capable of finding their own sources of information and making commitments – possibly
unauthorized commitments. Once intelligent software agents are viewed as legal agents within an
agency relationship, it follows that liability can be attributed to the actions of the software agents,
binding the software licensee (principal) to legal duties.

Claims involving analogous automated technology can be analyzed to provide a framework
for developing jurisprudence regarding AI technology55. For example, a decision in a consolidated
class action in the District Court for the Eastern District of Missouri found that the use of a
computer program to simulate human interaction could give rise to liability for fraud. In re Ashley
Madison Customer Data Sec. Breach Litig., 148 F. Supp. 3d 1378, 1380 (JPML 2015). The claims
related to the 2015 data breach on the infamous Ashley Madison online dating website that resulted
in the mass dissemination of user information; the allegations were that the defendants engaged in
deceptive and fraudulent conduct by creating fake computer “hosts” or “bots,” which were
programmed to generate and send messages to male members under the guise that they were real
women, inducing users to make purchases on the website56.

54 Paulius, Č., Grigienė, J., & Sirbikytė, G. (2015).
55 Croft, J. (2016, October 6).

There is precedent for litigation over the safety of surgical robots57, with the claims all proceeding
on some form of agency theory, rather than claiming that the robot itself bears liability.

Current laws of agency may not apply once an autonomous machine decides for itself what course
of action it should take; at that point, the agency relationship breaks down. A principal is subject
to liability for an agent's actions only when the agent is acting within the scope of employment.
Once AI programs become more adaptive and capable of learning on their own, courts will have
to determine whether such programs can be subject to a unique variant of agency law.

Product Liability

Product liability can be classified into three distinct categories58: negligence (tort), contract law
and strict liability under consumer protection legislation (in the UK, the Consumer Protection Act
1987).

Product Liability through Negligence

Product liability in tort refers to a breach of a duty of care in negligence. Since the seminal case of
Donoghue v. Stevenson, tortious duties can run concurrently with contractual liabilities. The
essence of the case was that if a consumer purchases a product in a form intended to reach him or
her without the possibility of reasonable intermediate examination, and with the knowledge on the
part of the producer that the absence of reasonable care in the preparation of the product will result
in reasonably foreseeable personal injury or property damage, then that producer owes a duty to
take reasonable care in its production. Donoghue v. Stevenson concerned a decomposed snail in a
ginger beer bottle. It can easily be extrapolated to the analysis of liability in a driverless car or a
surgical robot.

56 Quinn Emanuel. (2016, December).
57 Quinn Emanuel. (2016, December).
58 Buyers, J. (2015, January).

The most obvious theory of tort liability that seems applicable to injuries caused by artificially
intelligent entities is products liability. Product liability is the area of law in which manufacturers,
distributors, suppliers, retailers, and others who make products available to the public are held
responsible for the injuries those products cause. Artificially intelligent entities will presumably
be manufactured by a company, and accordingly the company may be held liable when an AI goes
awry59.

A manufacturer may be held liable under a negligence cause of action when an AI causes an injury
that was reasonably foreseeable to the manufacturer. The typical prima facie negligence claim
requires that an injured plaintiff show that:

(i) the manufacturer owed a duty to the plaintiff;
(ii) the manufacturer breached that duty;
(iii) the breach was the cause in fact of the plaintiff's injury (actual cause);
(iv) the breach proximately caused the plaintiff's injury (proximate cause); and
(v) the plaintiff suffered actual quantifiable injury (damages).
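The conjunctive nature of the test is easy to see in code form. The following toy sketch (the field names are hypothetical) returns a prima facie finding only when all five elements hold.

# Toy sketch: the prima facie negligence claim is a conjunction, so failure
# of any one element defeats the claim. Field names are hypothetical.
from dataclasses import dataclass

@dataclass
class NegligenceFacts:
    duty_owed: bool
    duty_breached: bool
    actual_cause: bool
    proximate_cause: bool
    quantifiable_damages: bool

def prima_facie_negligence(f):
    return all([f.duty_owed, f.duty_breached, f.actual_cause,
                f.proximate_cause, f.quantifiable_damages])

claim = NegligenceFacts(True, True, True, False, True)  # causation too remote
print(prima_facie_negligence(claim))  # False - the claim fails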

Alternatively, a manufacturer may be strictly liable for injuries caused by its product. Strict
liability does not require a showing of negligence, and accordingly, a manufacturer may be liable
even if it exercised reasonable care. Accordingly, the focus of strict liability will primarily be on
whether the defect in the manufacturer’s product was a cause of the plaintiff’s injury.

A liability framework based on negligence is causative in nature – it is essentially fault-based. The
claimant must prove that the defendant owed him or her a duty of care, that the defendant failed to
meet that standard, and that damage was caused as a result. In contrast to contractual damages,
tort-based damages are awarded on the basis of putting the injured party in the position they would
have been in had the tort not occurred.

59 Friedman, D. (n.d.).

The scope of potential liability in tort is wide. It could equally apply to manufacturers, producers
and anyone directly involved in the manufacture and distribution of a product with a defect. One
does, however, need to establish that a duty of care subsists and was breached – and, beyond this,
that the relevant chain of causation is not broken by the damage being too remote.

There are a number of disadvantages with this liability framework. There are very real difficulties
in claiming damages for pure economic loss in tort – the high-water mark of Junior Books v.
Veitchi [1983] 1 AC 520 must be met. There are only a limited number of circumstances where
this is possible, including, for example, negligent advice from surveyors.

Contributory negligence can also act as a defense to liability if it is shown that the claimant should
have known of the defect but negligently failed to recognize it, negligently used the product or
failed to take account of its operating instructions. In such cases, the damages are reduced to a
degree commensurate with the claimant’s negligence.

For example,60 in Ferguson v. Bombardier Service Corp., 244 F. App'x 944 (11th Cir. 2007), the
court rejected a manufacturing defect claim against the manufacturer of an autopilot system in a
military cargo plane, when the court found equal credibility in the defense theory that the loading
of the plane was improper, such that a strong gust of wind caused the plane to crash. Even cases
decided almost fifty years ago reflect the current legal analysis concerning the question of liability
for automated technologies. For example, in Nelson v. American Airlines, Inc., 70 Cal. Rptr. 33
(Cal. Ct. App. 1968), the Court applied the doctrine of res ipsa loquitur in finding an inference of
negligence by American Airlines relating to injuries suffered while one of its planes was on
autopilot but ruled that the inference could be rebutted if American Airlines could show that the
autopilot did not cause the accident or that an unpreventable cause triggered the accident.

60 Buyers, J. (2015, January).
Volenti non fit injuria – voluntary assumption of risk – is less common in product liability cases,
on the basis that if a claimant knows of the defect they are less likely to use the product, and if
they do, that usually breaks the causative chain between defect and damage.

Finally, proving liability in tort can be very difficult due to information asymmetry – especially in
product liability cases. Very often, the details that are required to show liability are held by the
defendant.

Product Liability through Contract

Contract clearly has a role in determining product-based liability. Contracts ensure that
manufacturers and retailers sell products that meet contractually determined standards. Contract
liability is aimed at the recovery of financial (or pure economic) loss as a result of breach of these
contractual standards; however, contract liability can in some circumstances also lead to the
recovery of damages for consequential loss and/or damage.

Contract terms may be either express – as to defects and warranties – or implied. In the UK there
are implied terms as to quality, fitness for purpose, title and description in the Consumer Rights
Act 2015 (for B2C contracts) and the Sale of Goods Act 1979 (for C2C and B2B contracts).
Although there is not a focus on “defects” per se under the sale of goods legislation, there is clearly
an emphasis on conformity with description. Arguably that could amount to nearly the same thing:
a failure to conform to a description or specification is very close to a “defect” in practical terms.
Similar legislation exists in India in the form of the Sale of Goods Act, 1930 and the Consumer
Protection Act, 1986.

Strengths and weaknesses of contract liability


Contract is a causative-based liability framework. In order to found liability, the claimant must
prove that there was a breach of either an express or an implied term and that that breach caused
the loss. As per Hadley v. Baxendale [1854] EWHC J70, causation is a relevant factor in
determining liability.

The primary remedy for breach of contract is damages (as assessed to put the innocent party in the
position they would have been had the contract been correctly performed). The primary advantage
of contract liability is of course that it is open to the contract counterparties to determine the scope
of the contract responsibilities and obligations as between them (and hence the liability) if things
do go wrong. This means that it is quite open to tailor the agreement to the functions and
performance of the AI system involved.

The major disadvantage of contract liability, of course, is that it is not a liability that applies
generally to the “whole world” but rather one which is constrained by contract privity. Privity of
contract means that obligations can only be enforced by contract counterparties. There are limited
exceptions in some jurisdictions – such as the Contracts (Rights of Third Parties) Act 1999 in the
UK. It is possible to conceive of a contractual relationship subsisting in one’s use of an intelligent
system.

There has also been extensive litigation over the safety of surgical robots61, especially the “da
Vinci” robot manufactured by Intuitive Surgical, Inc. See, e.g., O'Brien v. Intuitive Surgical, Inc.,
No. 10 C 3005, 2011 WL 304079, at *1 (N.D. Ill. Jul. 25, 2011). While manufacturers of medical
equipment and devices can be liable through products liability actions, the learned intermediary
doctrine results in the manufacturer having no duty to the patient and thus prevents plaintiffs from
suing medical device manufacturers directly. See, e.g., Banker v. Hoehn, 278 A.D.2d 720, 721,
718 N.Y.S.2d 438, 440 (2000). This liability structure makes it challenging for patients to win
products liability suits in medical device cases. The same challenge may be faced by users of AI
systems.

Although the court in United States v. Athlone Indus., Inc., 746 F.2d 977, 979 (3d Cir. 1984)
stated that “robots cannot be sued” and discussed instead how the manufacturer of a defective
robotic pitching machine is liable for civil penalties for the machine's defects, it is important to
note that this decision was rendered in 1984.

61 Quinn Emanuel. (2016, December).
In a case involving an internet advertising breach of contract claim, the court was asked to resolve
a dispute over the meaning of “impressions,” a key term in Internet advertising. Go2Net, Inc. v. C
I Host, Inc., 115 Wash. App. 73 (2003). The Go2Net Court determined that the parties’ contract
permitted visits by search engines and other “artificial intelligence” agents, as well as human
viewers, in the advertiser’s count of “impressions”.

Cases involving personal injury resulting from automated machines have also been litigated. For
example, cases have involved workers compensation claims or claims against manufacturers by
workers injured by robots on the job. See, e.g., Payne v. ABB Flexible Automation, Inc., 116 F.3d
480, No. 96-2248, 1997 WL 311586, *1-*2 (8th Cir. 1997) (per curiam) (unpublished table
decision); Hills v. Fanuc Robotics Am., Inc., No. 04-2659, 2010 WL 890223, *1, *4 (E.D. La.
2010).

There is a lack of consistency in contractual standards in relation to contracts for the sale of goods,
which makes the application of the framework complex.

Product Liability through Consumer Law62

Finally, in relation to product liability, in the UK there is the Consumer Protection Act 1987,
which implements the EU Directive (85/374/EEC) on Liability for Defective Products. This Act
introduces a strict liability regime which does not affect the general availability of contract and
tort based remedies. What the Act provides is that a person who is injured or whose personal
property is damaged by a product will be able to claim against a manufacturer or supplier of that
product (and certain other third parties) if it can be shown that the product was defective. There is
no requirement to prove fault on the part of the manufacturer, but obviously there is a requirement
on the claimant to show that the defect existed on the preponderance of the evidence. The Act
introduces a consumer expectations test in that a defect exists where “the safety of the product is
not such as persons generally are entitled to expect” (s.3(1)). Consumer expectations themselves
are subject to a reasonableness test.

62 Buyers, J. (2015, January).
UCLA professor John Villasenor and others argue that product liability could cover any
driverless car accidents.

There are multiple advantages to the Consumer Protection Act. There is no requirement to show
fault; neither is there a privity requirement – the regime itself allows for a wide variety of potential
liability targets, including suppliers and manufacturers. There are still some problems with
consumer protection product liability. Causation still exists – although it is limited to the finding
of defects and moderated by a consumer expectation test. The Act is designed to cover claims for
real damage, so it does not encompass claims for pure economic loss.

In the context of AI, there are also problems with the definition of “product” under the Act.
Product is defined as “any goods or electricity and includes products aggregated into other
products, whether as component parts, raw materials or otherwise”. The Act is not clear as to
whether software and/or other products of an intellectual type are included in the definition of its
scope. Disembodied software per se is not treated as a “good” under English law, although there
is an argument which might encompass software embedded into functional hardware. The
Consumer Protection Act, 1986 of India defines a consumer as any person who buys any goods
for a consideration (s.2(d)(i)) or hires or avails of any services for a consideration (s.2(d)(ii)), and
hence does not suffer from this limitation.

There is also the "developmental risks defence", which provides a defense to the manufacturer “if
the scientific and technical knowledge at the time the product was manufactured was not such that
a producer of a similar product might have been able to discover the defect”. This is obviously
highly relevant to our current discussion which inevitably involves the “state of the art” in relation
to machine development.

Section 9 of the Consumer Rights Act 2015 provides that where goods are sold “in the course of a
business” there is an implied term that the goods are of satisfactory quality and fit for a purpose
that the buyer has made known to the seller. Products are therefore of satisfactory quality if they
meet the standard that a reasonable person would regard as satisfactory, taking into account their
description, price and all other relevant circumstances. In other words, it could be argued that
contract implied terms create a consumer expectation test. For public policy reasons, there are
higher standards which apply to contracts made with consumers – the Consumer Rights Act 2015
requires that, in assessing whether products are of satisfactory quality, account is taken of any
“public statements on the specific characteristics of the goods made about them by the Seller or
the producer”.

In the current mixed human-and-AI-driver world, applying product liability also has some
problems. For a “product” like an autonomous car, the law groups possible failures into familiar
categories: design defects, manufacturing defects, information defects, and failures to instruct on
appropriate uses. Complications may arise when product liability claims are directed at failures in
software, as computer code has not generally been considered a “product” but instead is thought
of as a “service,” with cases seeking compensation for alleged defective software more often
proceeding as breach of warranty cases rather than product liability cases63. See, e.g., Motorola
Mobility, Inc. v. Myriad France SAS, 850 F. Supp. 2d 878 (N.D. Ill. 2012) (case alleging defective
software pleaded as a breach of warranty).

The auto manufacturer Toyota was embroiled in a multi-district litigation matter involving
allegations that certain of its vehicles had a software defect that caused the vehicles to accelerate
notwithstanding measures the drivers took to stop. The court denied Toyota’s motion for summary
judgment premised on the grounds that there could be no liability, because the plaintiff and
plaintiff’s experts were unable to identify a precise software design or manufacturing defect,
instead finding that the evidence supported inferences from which a reasonable jury could
conclude that the vehicle continued to accelerate and failed to slow or stop despite the plaintiff’s
application of the brakes.

It is difficult to draw the line between damages resulting from the AI’s will, i.e. derived from its
own decisions, and damages resulting from a product defect – unless we equate independent
decision-making (which is a distinctive AI feature) with a defect. If liability is fixed on the
manufacturer or programmer for the independent decisions of AI, the burden of responsibility
would be disproportionate. It can lead to programmers fearing to reveal their identity in public, or
it can otherwise stall the progress of technology development in official markets, moving all the
programming work into unofficial markets.

63 Quinn Emanuel. (2016, December).

Common enterprise Liability

Taking the driverless car as an example again: as noted in the IPR discussion, it is an assemblage
of many and varied integrated systems produced by multiple manufacturers – sensors such as radar
and laser detection to navigate road obstructions, a computer to direct its actions, its own operating
software and external map data – all of which must work together effectively, alongside the usual
mechanical components which form a standard car64.

This complexity gives rise to a potential plethora of liability targets, ranging from the vehicle
manufacturer itself, all the way down to the designer of an individual component, depending upon
where the actual defect, fault or breach occurs. Existing causative liability models work well
when machine functions (and hence responses) can by and large be traced back to human
design, programming and knowledge. They begin to break down when this cannot be done.

One option is to insist on strict liability (discussed later) for manufacturers of the automated
systems. If there is no strict liability, a court might find itself in uncharted waters and forced to
make a determination as to how best to weigh the comparative liability of AI programs and drivers
in case of autonomous vehicles. The solution suggested by the existing law, while dated, would
hold the vehicle’s manufacturer liable and let the manufacturer seek indemnity or contribution
from other parties, if any, that might be responsible.

Another possibility is to divide responsibility among a group of persons by grafting the Common
Enterprise Doctrine onto a new strict liability regime65. This idea has been raised by David C.
Vladeck, who argues that each entity within a set of interrelated legal persons may be held liable
jointly and severally for the actions of other entities that are part of the group. Such a liability
theory does not require that the persons function jointly; it would be enough to work towards a
common end, such as to design, program, and manufacture an AI.

64 Buyers, J. (2015, January).
65 Paulius, Č., Grigienė, J., & Sirbikytė, G. (2015).

A “common enterprise” theory might allow the law to impose joint liability, for limited types of
claims, without having to assign every aspect of wrongdoing to one party or another. The
competing interests between manufacturers of various AI components and the end products that
incorporate those components will need to be addressed through contracts and robust
indemnification agreements.

In the field of consumer protection, for instance, the Federal Trade Commission often invokes
the “common enterprise” doctrine to seek joint and several liability among related companies
engaged in fraudulent practices. See, e.g., FTC v. Network Servs. Depot, Inc., 617 F.3d 1127
(9th Cir. 2010); SEC v. R.G. Reynolds Enters., Inc., 952 F.2d 1125 (9th Cir. 1991); FTC v. Tax
Club, Inc., 994 F. Supp. 2d 461 (S.D.N.Y. 2014).

Criminal Liability
CNBC reported66 an incident involving online “bots,” where an “automated online shopping bot”
was set up by a Swiss art group, given a weekly allowance of $100 worth of Bitcoin—an online
cryptocurrency—and programmed to purchase random items from the “dark web” where shoppers
can buy illegal/stolen items. In January 2015, the Swiss police confiscated the robot and its illegal
purchases to date, but did not charge the bot or the artists who designed it with any crime. We can
soon expect to see cases of similar ilk emerge in both criminal and civil courtrooms.
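The bot’s reported design is simple to reconstruct in outline. The sketch below is hedged: the catalogue, prices and listing names are hypothetical placeholders, and the art group’s actual code is not public. What it shows is why no human forms the intent behind any individual purchase.

# Outline of the reported design: a weekly budget and an unsupervised random
# purchase. The catalogue and prices are hypothetical placeholders; the art
# group's actual code is not public.
import random

WEEKLY_BUDGET_USD = 100
catalogue = [
    {"item": "sneakers", "price": 60},
    {"item": "e-book", "price": 15},
    {"item": "unknown listing", "price": 90},  # the bot cannot tell legal from illegal
]

def weekly_purchase(listings, budget):
    # Pick one random affordable listing; no human reviews the choice,
    # which is exactly what makes attributing criminal intent so hard.
    affordable = [l for l in listings if l["price"] <= budget]
    return random.choice(affordable) if affordable else None

print(weekly_purchase(catalogue, WEEKLY_BUDGET_USD))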

If AI can commit crimes, a few interesting questions need to be answered67.


Can society impose criminal liability upon robots?

66 Kharpal, A. (2015, April 21).
67 Friedman, D. (n.d.).
How can AI entities fulfill the two requirements of criminal liability (i.e., actus reus and mens
rea)?
If AI is criminally liable, how do you punish an AI robot?

There are two elements of criminal liability which need to coincide in the guilty actor for liability
to be made out: actus reus, the criminal act or conduct, and mens rea, the criminal intent or criminal
mind – knowledge or general intent in relation to the conduct element. Both elements must be
present concurrently to impose criminal liability.

The actus reus requirement is expressed mainly by acts or omissions. Sometimes, other factual
elements are required in addition to conduct, such as the specific results of that conduct and the
specific circumstances underlying that conduct.

The mens rea requirement has various levels of mental elements. The highest level is expressed
by knowledge, sometimes accompanied by a requirement of intent or specific intention. Lower
levels are expressed by criminal negligence or “recklessness” (a reasonable person should have
known), or by strict liability offenses.

Gabriel Hallevy has proposed that AI entities can fulfill the two requirements of criminal liability
under three possible models of criminal liability68:

(i) the Perpetration-by-Another liability model;
(ii) the Natural-Probable-Consequence liability model; and
(iii) the Direct Liability model.

The Perpetration-by-Another Liability (PBAL) Model: AI as Innocent Agents

This model does not consider the AI robot as possessing any human attributes. Instead, the PBAL
model considers that AI entities are similar to mentally limited persons, such as children, and
therefore do not have the criminal state of mind to commit an offense.

68 Hallevy, G. (2010).
The AI robot is viewed as an intermediary that is used as an instrument, while the party
orchestrating the offense is the real perpetrator (hence the name, perpetration-by-another). The
person controlling the AI, or the perpetrator, is regarded as a principal in the first degree and is
held accountable for the conduct of the innocent agent (the AI). The perpetrator’s liability is
determined on the basis of that conduct and his own mental state. The AI robot is an innocent
agent.

This model would likely be implemented in scenarios where programmers have programmed
an AI to commit an offense, or where a person controlling the AI has commanded it to commit
an offense. This model would not be suitable when the AI robot decides to commit an offense
based on its own accumulated experience or knowledge.

To take a specific example, imagine a sophisticated aircraft that ejects its pilot out of the cockpit,
thereby killing him. The perpetrator could be the programmer of the AI software who wrote the
programme with the specific mens rea of killing the pilot. Another candidate might be the user of
an AI system, where the user specifically orders the AI to take a particular course of conduct which
would lead to a crime being committed – such as a person who orders his dog to attack a burglar.
The dog commits the assault, but the person who gave the order is deemed to be the perpetrator.

The Natural-Probable-Consequence Liability (NPCL) Model: Foreseeable Offenses

This model of criminal liability assumes deep involvement of the programmers or users in the AI
robot’s daily activities, but without any intention of committing an offense via the AI robot. For
instance, one scenario would be when an AI robot commits an offense during the execution of its
daily tasks. This model is based upon the ability of the programmers or users to foresee the
potential commission of offenses; a person might be held accountable for an offense if that
offense is a natural and probable consequence of that person’s conduct.

Natural-probable-consequence liability seems to be legally suitable for situations where an AI
robot committed an offense, but the programmer or user had no knowledge of it, had not intended
it and had not participated in it. The natural-probable-consequence liability model only requires
the programmer or user to be in a mental state of negligence, not more. Programmers or users
are not required to know about any forthcoming commission of an offense as a result of their
activity, but are required to know that such an offense is a natural, probable consequence of their
actions.

Liability may be predicated on negligence and would be appropriate in a situation where a
reasonable programmer or user should have foreseen the offense and prevented it from being
committed by the AI robot.

In the NPCL model, the AI takes an action which is a “natural and probable” consequence of the
way it is programmed. So, going back to our earlier example of the airplane ejecting the pilot: in
this model, the programmer does not need to have a specific intent (or mens rea) to kill the pilot,
but rather a state of criminal negligence – that is to say, reckless disregard as to whether the
programming supplied to the AI could lead to the ejection of the pilot – if a reasonable person in
the place of the programmer could have foreseen the offence as a natural, probable consequence
of the AI's programming. The NPCL doctrine is highly problematic and has been largely
discredited in many US states (as well as in comparable jurisdictions, such as the UK)69.

The Direct Liability (DL) Model: AI Robots as Subjects of Criminal Liability

Finally, we have the "direct liability" model, which assumes some level of personal responsibility
on the part of the machine for its actions70. In order for this model to work, self-awareness and an
understanding of act and consequence on the part of the AI perpetrator are essential. This is the
scenario where individual legal status would have to be conferred on AI. If an AI robot fulfills the
factual element (actus reus) and the mental element (mens rea), it will be held criminally liable on
its own.

The criminal liability of an AI robot does not replace the criminal liability of the programmers or
the users, if criminal liability is imposed on the programmers and users by any other legal
path. Criminal liability is not to be divided, but rather, added; the criminal liability of the AI robot
is imposed in addition to the criminal liability of the human programmer or user.

69 Heyman, M. G. (n.d.).
70 Buyers, J. (2015, January).
What kind of defenses are available to AI?

All negative fault elements should be attributable to AI robots. Most of these elements are
expressed by the general defenses in criminal law (e.g., self-defense, necessity, duress, or
intoxication)71.

Where the perpetrator has a reduced ability to distinguish right from wrong, his or her legal liability
is commensurately reduced. The principle of doli incapax in relation to infants and the mentally
incapable would apply if machines are not fully capable of understanding the consequences of
their actions.

Could AI raise the defense of insanity in relation to a malfunctioning AI algorithm, when its
analytical capabilities have become corrupted as a result of that malfunction? Could it also assert
a defense of being under the influence of an intoxicating substance (similar to humans being under
the influence of alcohol or drugs) if its operating system is infected by malware?

71 Buyers, J. (2015, January).
The final question would be: how does one punish an individually accountable machine for a crime
committed?

The death penalty might mean deletion for a self-aware machine. Imprisonment might mean
removing the AI from its intended purpose and a criminal fine might be translated into re-using
the machine for a community purpose away from its original role.

AI: The causation challenge

All of the liability frameworks require some element of causation, to a greater or lesser degree. It
is easier to deal with a case when defects are traceable: machine decisions that can be traced back
to defective programming, failures to provide correct operating instructions, and incorrect
operation of machines. The challenge arises when the “defect” is inexplicable, or an event cannot
in fact be traced back to a defect, a fault or a directly related human error72.

As intelligent machines and AI systems “learn” for themselves, their behaviours are less and less
directly attributable to human programming. Advanced AI machines will not be acting on a
prescriptive instruction set, but on a system of rules that may not have anticipated the precise
circumstances under which the machine should act.

To take the example of our driverless car: what if the vehicle has been programmed to look after
and preserve the safety of its occupants and also to avoid pedestrians at all costs, and is placed in
an unavoidable situation where it has to decide whether to avoid a pedestrian crossing into its path
(and thereby run into a brick wall, injuring or even killing its occupants) or to run over the
pedestrian (and thereby save its occupants)? Can any outcome of that decision be said to be a
failure or a defect – even if people are injured or possibly killed as a result?
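Reduced to a sketch, the programmed rule might look like the following; the harm scores and manoeuvre options are purely hypothetical illustration. The point is that whichever branch is chosen executes exactly as designed, so neither outcome maps cleanly onto a classical “defect”.

# Hedged sketch of the dilemma: both programmed rules are weighed, so
# whichever manoeuvre is chosen is intended behaviour, not a malfunction.
# Harm scores and options are hypothetical illustration only.
def choose_manoeuvre(options):
    # options: list of (name, occupant_harm, pedestrian_harm) tuples;
    # return the option with the lowest total expected harm.
    return min(options, key=lambda o: o[1] + o[2])

options = [
    ("swerve_into_wall", 0.8, 0.0),  # spares the pedestrian, injures occupants
    ("brake_straight",   0.1, 0.9),  # spares occupants, strikes the pedestrian
]
print(choose_manoeuvre(options))  # the "least-harm" choice still harms someone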

It is at this relatively new interface where existing product liability frameworks begin to weaken
and break down. There are some partial fixes in the existing liability frameworks. Tort in particular

72
Sobowale, J. (2016, April).

provides for the principle of res ipsa loquitur – or the thing speaks for itself. The doctrine is equally
applicable in the US and the UK.

Res ipsa loquitur is useful in dealing with cases involving multiple successive failures which
cannot in themselves be readily explained. It remains to be seen whether the principles of res
ipsa loquitur will be used by modern courts to conclude that the car (or other automated device),
not the driver/operator, is at fault. Defendants will argue that the doctrine should not apply when
it is unreasonable to infer that the accident was caused by a design or manufacturing defect, or
when the accident in question is not one ordinarily seen with design defects.

A classic example of the doctrine's application is the US litigation against Toyota Motor
Corporation, in which many of Toyota's high-end Lexus models simply accelerated for no
particular reason, despite the intervention of their drivers. Despite much investigation, the cause
of these failures could not be pinpointed. Toyota took the step of settling 400 pending cases
against it after an Oklahoma jury applied the doctrine of res ipsa loquitur and awarded the
plaintiffs in that case $3m in damages.

Strict Liability for Dangerous Activities

If AI were treated as a greater source of danger and the person on whose behalf it acted were
declared its manager, that person could be held liable without fault. The question is whether AI
software systems can be recognized as a greater source of danger. There are two main theories of
the greater source of danger: that of the object and that of the activities. Under the theory of the
object, the greater source of danger is an object of the physical world that cannot be fully
controlled by a person. The theory of activities provides that the greater source of danger is
certain types of activities associated with greater danger to others. Both theories imply the
greater danger of certain objects to persons73.

73 Čerka, P., Grigienė, J., & Sirbikytė, G. (2015).

The greater source of danger is defined as a specific object of the physical world that has specific
properties. That is precisely what AI is, i.e. a specific object characterized by specific properties
inherent only to it. Since AI is able to draw individual conclusions from the gathered, structured,
and generalized information as well as to respond accordingly, it should be accepted that its
activities are hazardous. Accordingly, the AI developer should be held liable for the actions of the
greater source of danger, and, in this case, liability arises without fault.

Liability without fault is based on the theory of risk. The theory is based on the fact that a person
carries out activities that he or she cannot fully control; therefore, a requirement to comply with
the safety regulations would not be reasonable, because even if the person acted safely, the actual
risk of damage would still remain. The activities of AI are risky, and the risk may not always be
prevented by means of safety precautions. For this reason, AI meets the requirements for being
considered a greater source of danger, and the manager of a greater source of danger is required to
assume liability for its actions by insuring AI. In this case, it would be useful to employ the “deep
pocket” theory which is common in the US. The “deep pocket” theory is that a person engaged in
dangerous activities that are profitable and useful to society should compensate for damage caused
to society from the profit gained. Whether producer or programmer, the person with a “deep
pocket” must guarantee his hazardous activities through compulsory insurance of his civil
liability.

AI Regulation
In this final section, possible options to address the challenges and issues are elaborated.
Considering the complexities involved, a single comprehensive regulation addressing the
multitude of issues is not easy to frame. It is important to create regulations that are flexible and
dynamic, based on existing or newly evolved jurisprudential principles. Flexibility and
dynamism are key, considering the speed of technological change happening on the ground.

Liability Regime

With the emergence of AI technologies comes ever-increasing public concern over the many
risks that arise where decisions are made by computers. Previous sections dealt at length with the
ethical, security, and regulatory concerns presented by the rapid growth of AI technologies.
Policymakers are being forced to venture into new territory when tasked with drafting legislation.
A fine balancing act is needed to protect the public from the inherent dangers of computer
judgment replacing human decision making while not stifling AI innovation. The rapid
development of AI technology is in tension with the relative snail's pace, and lack of expertise,
of state and national lawmakers. Laws and regulations for the protection of the public from AI
technologies will need to be enacted. The courts will be the first to address these novel legal
issues, applying existing laws and principles innovatively to meet the new demands.

The consequences, responsibility and liability for AI actions sit on a scale. At one end (where the
current state of the art sits), AI systems are an assemblage of complex components, and existing
contract, tort and consumer protection principles can be used to trace liability; at the other end
sit self-aware, sentient thinking machines that accrue artificial personhood (in some form) and
assume self-responsibility, and hence liability, for their actions74. Contract, tort and strict
liability consumer protection laws are effective to a degree in managing these

74 Buyers, J. (2015, January).

consequences, but they effectively break down where cause and effect cannot be made out.
Robots and AI technology have become far more sophisticated, and courts will continue to
grapple with the question of assessing liability as these AI technologies and autonomous
machines gain mainstream acceptance.

Legislatures and regulatory agencies in the developed world have already been making great
strides to determine how best to attribute fault in such situations. For example, the US states of
Nevada, Florida, California, Michigan and Tennessee and the District of Columbia have all passed
legislation related to autonomous automobiles.

A regime based on enterprise liability, with elements of malpractice, products liability, and
vicarious liability, could address these legal challenges while encouraging professionals to
purchase and use these AI systems75. This would prevent the inequities that may arise from
courts applying different theories of liability.

Insurance76

Negligence and breach of contract actions are becoming more and more complex to litigate, as
resources are spent identifying what has (or indeed might have) gone wrong. The argument runs
that money is better spent compensating the victims of accidents and incidents involving
autonomous systems than on expensive lawyers and expert witnesses.

A strict liability insurance model is commonplace for motor vehicle accident claims in many
countries. The 1973 Accident Compensation Act in New Zealand is a classic example of such a
system working in practice – not, of course, directly in relation to AI systems, but in connection
with motor vehicle accidents. In New Zealand, road traffic accidents are not litigated; rather,
victim compensation is automatically paid at government-set tariffs and funded by motor
insurance premiums. In India, third-party insurance is mandatory under The Motor Vehicles

75 Čerka, P., Grigienė, J., & Sirbikytė, G. (2015).
76 Buyers, J. (2015, January).

Act 1988. Another potential solution is to extend the scope of the Public Liability Insurance Act,
1991 to cover dangerous AI systems.

So far as research and development is concerned, a strict liability insurance-based model will
also incentivize research on new intelligent AI-based systems, rather than forcing the R&D
divisions of corporations to consider what defensive steps they should be taking to avoid a class
action.

Individualization

“Could Artificial Intelligence become a legal person?”77 is another complex question that needs
an answer. Today it arises only in the reels of science fiction movies; within a span of a decade,
an answer may be needed in real-life situations. Clearly, if machines possess a sentient
personality of their own, there is no reason why they cannot directly accrue liability in the same
manner in which living, breathing humans accrue it78. One cannot go back in time and accord AI
systems the historical treatment of slaves (respondeat superior). The 1825 version of the
Louisiana Civil Code, at Article 35, described a “slave” as “one who is under the power of his
master, and who belongs to him; so that the master may sell and dispose of his person, of his
industry, and of his labor, without his being able to do anything, have anything, acquire
anything, but what must belong to his master.”

How will liability rules cope with machines when they individuate, that is to say, develop
distinct legal personalities and individual identities of their own? The EU-driven RoboLaw
project promotes the development of guidelines governing the operation of robotics, including
AI. In the current scheme of things, AI has no legal personality. Therefore, in litigation for
damages, AI may not be recognized as an entity eligible for the compensation of damages.
However, in terms of law, a situation where damages go uncompensated is impossible: the legal
system establishes the liability of those responsible for the injury, the so-called “legal cause” of
the injury. But if AI is not a legal entity, who is to compensate for damages caused by it?

77 Čerka, P., Grigienė, J., & Sirbikytė, G. (2015).
78 Buyers, J. (2015, January).

Licensing Model – Turing Registries

Curtis E.A. Karnow has suggested that for intelligent machines we need to go a step further and
set up what he terms “Turing Registries,” after the great computer pioneer Alan Turing79. This
would work by submitting intelligent machines to a testing and certification process that
quantifies risk on a spectrum: the higher the intelligence and autonomy, and hence the greater the
consequences of failure, the higher the premium payable to “release” that machine into the
working environment.
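
A hedged sketch of how a registry's pricing might be shaped (the scores, base rate and
multiplicative formula below are invented for illustration; Karnow proposes the principle of
risk-scaled premiums, not this formula):

    # Illustrative only: the numbers and formula are assumptions, not Karnow's.
    def turing_registry_premium(autonomy, failure_consequence, base_rate=1000.0):
        """Premium grows with both autonomy and the severity of failure.

        autonomy            -- 0.0 (fully scripted) to 1.0 (fully self-directed)
        failure_consequence -- 0.0 (trivial) to 1.0 (catastrophic)
        """
        if not (0.0 <= autonomy <= 1.0 and 0.0 <= failure_consequence <= 1.0):
            raise ValueError("scores must lie in [0, 1]")
        # Multiplicative: a highly autonomous but harmless system, or a simple
        # but dangerous one, pays less than a system that is both.
        return base_rate * (1 + 9 * autonomy * failure_consequence)

    print(turing_registry_premium(0.2, 0.1))   # e.g. a low-stakes recommender
    print(turing_registry_premium(0.9, 0.9))   # e.g. an autonomous vehicle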

The premium payable for certification would be paid by the developer or manufacturer of the AI
entity wanting to deploy it into the market. The premiums would fund a “common pool” from
which claims would be paid out. The system could become self-fulfilling if AIs were prohibited
from use without this certification. As has been pointed out, this model is similar, but not
identical, to insurance: it removes causation and proximate cause, but it also covers the wilful
acts of AIs, something normally excluded by insurance.

Does the growing intelligence of AI robots subject them to legal social control, just as any other
legal entity? If not artificial personhood, are there any other alternatives? A Harvard research
team (reported in MIT Technology Review) has proposed an interesting alternative: building AI
with the ability to explain the logic behind its autonomous decisions without disclosing the core
algorithm that is the intellectual property of the AI's creators.

AI – Explain the decision

When we rely on AI systems to make increasingly important decisions, there should be
mechanisms of redress when the results turn out to be unacceptable or difficult to understand.

A Harvard University team comprising Finale Doshi-Velez, Mason Kortz, and others explored
the legal issues that AI systems raise, identified key problems, and suggested potential solutions.

79 Karnow, C. E. (1996).

The team involved computer scientists, cognitive scientists, and legal scholars. They proposed
the option of making AI systems explain their decisions without revealing trade secrets or
internal algorithms80. This might require additional resources in the development of AI systems
and in the way they are interrogated.

Source: MIT Technology Review (2017, November)

Explanation systems must be separate from AI systems, says the Harvard team. They begin by
defining “explanation”: “When we talk about an explanation for a decision, we generally mean
the reasons or justifications for that particular outcome, rather than a description of the decision-
making process in general.” This is done by laying out the rules the system follows when making
an autonomous decision, similar to the log files used in other computing systems.
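
A minimal sketch of what such a rule-trace record might look like, kept apart from the model's
internals (the schema, field names and file name are assumptions for illustration, not the
Harvard team's design):

    # Illustrative sketch: an explanation record kept separate from the model,
    # stating the reasons for an outcome without exposing weights or algorithms.
    import json
    from datetime import datetime, timezone

    def log_decision(decision_id, outcome, rules_applied, inputs_considered):
        """Append one decision's justification, log-file style."""
        record = {
            "decision_id": decision_id,
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "outcome": outcome,
            "rules_applied": rules_applied,          # human-readable reasons
            "inputs_considered": inputs_considered,  # factors, not raw internals
        }
        with open("decision_log.jsonl", "a") as f:
            f.write(json.dumps(record) + "\n")
        return record

    log_decision(
        "loan-2018-00042",
        outcome="application declined",
        rules_applied=["debt-to-income ratio above policy threshold"],
        inputs_considered=["declared income", "existing liabilities"],
    )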

Under U.S. law, explanations are required in a wide variety of situations and in varying levels of
detail. For example, explanations are required in cases of strict liability, divorce, or discrimination;

80 MIT Technology Review. (2017, November).

for administrative decisions; and for judges and juries. Doshi-Velez and colleagues conclude
that legally feasible explanations are possible for AI systems, because the explanation for a
decision can be produced separately from a description of its inner workings.

AI Regulation – EU

The EU is at the forefront of developing regulations for the most pressing and emerging societal
needs. Be it environmental protection, privacy or new technologies, the EU has taken the lead in
establishing policies, frameworks and regulations across varied spheres. Two such efforts are
briefly discussed here.

The RoboLAW project (full title: Regulating Emerging Robotic Technologies in Europe: Robotics
Facing Law and Ethics) was officially launched in March, 201281. It is funded by the European
Commission for the purposes of investigating ways in which emerging technologies in the field of
bio-robotics (and AI as well) have a bearing on the national and European legal systems,
challenging traditional legal categories and qualifications, posing risks to fundamental rights and
freedoms that have to be considered, and more generally demanding a regulatory ground on which
they can be developed and eventually launched82.

The most important outcome of RoboLaw is a final report containing the “Guidelines on
Regulating Robotics,” presented on 22 September 2014. It is addressed to the European
Commission, in order to establish a solid legal framework for the development of robotic
technologies in Europe. The guidelines are meant for use by the European Commission in
responding to the ethical and practical concerns regarding the application of emerging
technologies. The Guidelines on Regulating Robotics are the result of cross-border discussions,
both in the sense of gathering multiple nationalities and combining multiple scientific
disciplines, and of a wide dissemination activity through workshops, conferences and meetings.

81 Čerka, P., Grigienė, J., & Sirbikytė, G. (2015).
82 Palmerini, E. (2010).

The EU has already established a directive relevant to Cooperative Intelligent Transport Systems
(C-ITS)83. The ITS Directive 2010/40/EU may be used as the basis to adopt a coherent set of
rules at EU level in order to create a single market for cooperative, connected and automated
vehicles.
identifies in its Article 2 priority areas for the development and use of specifications and standards,
among which the area of linking the vehicle with the transport infrastructure is included. The
actions to be taken in this priority area are further detailed in Annex 1 to this Directive and
comprise, among others, the definition of necessary measures to integrate different ITS
applications on an open in-vehicle platform and to further progress the development and
implementation of cooperative (vehicle-vehicle, vehicle-infrastructure, infrastructure-
infrastructure) systems. Article 6 of the same Directive empowers the Commission to adopt
specifications ensuring compatibility, interoperability and continuity for the deployment and
operational use of ITS for other actions to be taken in the priority areas identified in Article 2.
Those specifications should be adopted through a delegated act. In addition, the Commission could
also use the empowerment bestowed upon it in priority area III which relates to ITS road safety
and security applications and which are further detailed in point 4 of Annex I to the ITS Directive.

AI and Cyber Security

AI systems are particularly vulnerable to hacking and cyber-attacks; the cyber-security of AI
systems is therefore critical. A common security framework and certificate policy should be
developed for the deployment of AI systems in different domains. Such development depends on
political support and collaboration among all players in the landscape, both public and private. It
would also mean developing the related public infrastructure elements (including Public Key
Infrastructure technology) that enable deployment of AI systems in different spheres. A key
challenge will therefore be to set up the necessary governance at international, national and
industry levels, involving all the main stakeholders, including public authorities, manufacturers,
suppliers and operators.
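
As a deliberately simplified illustration of what certificate-backed deployment could mean in
practice, the sketch below integrity-checks an AI model artifact before it is allowed to run. A
real PKI scheme would use X.509 certificates and asymmetric signatures rather than the
shared-key HMAC assumed here, and all names are hypothetical.

    # Simplified stand-in for PKI: in practice an authority would sign with a
    # private key and operators would verify against a certificate chain.
    import hashlib
    import hmac

    SIGNING_KEY = b"registry-held-secret"  # assumption: stands in for a CA key

    def sign_artifact(model_bytes):
        """Issued by the certifying authority when the AI system is approved."""
        return hmac.new(SIGNING_KEY, model_bytes, hashlib.sha256).hexdigest()

    def verify_before_deploy(model_bytes, signature):
        """Operators refuse to run models whose signature does not check out."""
        return hmac.compare_digest(sign_artifact(model_bytes), signature)

    model = b"...serialized model weights..."
    tag = sign_artifact(model)
    print(verify_before_deploy(model, tag))         # True: untampered, deploy
    print(verify_before_deploy(model + b"x", tag))  # False: tampered, reject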

83 European Commission. (2016, April 30).

AI Privacy and data protection safeguards

AI systems strive to solve the day-to-day problems of naturally intelligent beings. While
enabling, augmenting or automating tasks, AI systems are bound to collect, access, process and
transfer personal data that identifies, or can identify, a natural person. The implementation of AI
systems requires compliance with the applicable data protection legal framework. These rules lay
down that processing of such data is lawful only if it is based on one of the grounds listed
therein, such as the consent of users. The European Union's Data Protection Directive (Directive
95/46/EC), already in force, and its successor, the General Data Protection Regulation (GDPR),
are good reference points to start with. In a world where AI systems of their own volition
collect, access, process and transfer information, instruments like the GDPR might also require
re-work. Data protection by design and by default principles, and data protection impact
assessments, will be of extreme importance in the basic design and engineering of AI systems.
Sector-based data protection impact assessment templates and guidelines can be developed by
industry associations.
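
A small sketch of the data-protection-by-design idea in engineering terms (the field whitelist,
salt handling and names are assumptions for illustration; neither the Directive nor the GDPR
mandates any particular code):

    # Illustrative sketch: minimize and pseudonymize personal data before it
    # ever reaches the AI pipeline. All field names here are hypothetical.
    import hashlib

    ALLOWED_FIELDS = {"age_band", "region", "usage_pattern"}  # minimization list

    def pseudonymize(user_id, salt):
        """Replace a direct identifier with a keyed hash."""
        return hashlib.sha256((salt + user_id).encode()).hexdigest()[:16]

    def prepare_record(raw, salt):
        """Keep only fields the stated purpose needs; never pass raw identity."""
        record = {k: v for k, v in raw.items() if k in ALLOWED_FIELDS}
        record["subject"] = pseudonymize(raw["user_id"], salt)
        return record

    raw = {"user_id": "alice@example.com", "full_name": "Alice A.",
           "age_band": "30-39", "region": "EU", "usage_pattern": "evening"}
    print(prepare_record(raw, salt="rotate-this-salt"))
    # full_name and the raw e-mail address never enter the training data.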

The following actions can be taken by those who manufacture or deploy AI systems handling
personally identifiable information or sensitive data:

1. Demonstrate how the use of personal data is essential for AI system deployment and how
   it can improve the safety and efficiency of tasks, while ensuring compliance with data
   protection and privacy rules;
2. Offer transparent terms and conditions to end-users, using clear and plain language in an
   intelligible way and in easily accessible forms, enabling them to give their consent to the
   processing of their personal data;
3. Work on information campaigns to create the necessary trust among end-users and
   achieve public acceptance.

Building Capability
AI might become capable of explaining its decision logic. Similar capability building might be
required in law enforcement and justice administration systems. For example, in a vehicle accident

claims case, instead of cross-examining a driver, one might have to cross-examine an algorithm,
i.e., an expert on the system.

Law enforcement professionals, lawyers and judges might have to acquire new skills to deliver
justice in an AI world. It is time to build bridges between the so-called pure science disciplines
and the social science disciplines. An inter-disciplinary approach is needed in education and
skill development, including in the legal discipline.

It is equally important that procedural and evidence laws are revamped along with changes in
substantive laws. Another area that needs attention is the building of forensic tools to extract
legally acceptable evidence.

International Cooperation

International cooperation in the area of standardization and regulation of AI systems will be
critical84. Standardization and regulations can be sector-specific: for example, common traffic
rules and communication systems are a prerequisite for autonomous vehicles. The markets are
developing globally, and such systems will have global reach. Public authorities have an interest
in learning from each other and ensuring swift deployment of new technologies. Industry, too,
has a strong interest in international cooperation, since it looks to global markets when
developing equipment, services and business models. A forum equivalent to UNCITRAL could
anchor such cooperation.

84 European Commission. (2016, April 30).

Conclusion
Professor Klaus Schwab, Founder and Executive Chairman of the World Economic Forum,
explores in his new book, The Fourth Industrial Revolution, a revolution that is fundamentally
changing the way we live, work and relate to one another. Previous industrial revolutions liberated
humankind from animal power (Agrarian), made mass production possible (Industrial) and
brought digital capabilities to billions of people (Internet)85. This fourth revolution is powered by
Artificial Intelligence.

The definition of AI provides that AI is any artificially created intelligence, i.e. a software
system that simulates human thinking on a computer or other device, such as home management
systems integrated into household appliances, robots, autonomous cars, unmanned aerial
vehicles, etc. AI can be defined by reference to human thinking and to rational behavior: (i)
systems that think and act like a human being; and (ii) systems that think and act rationally.
These factors demonstrate that AI is different from conventional computer algorithms: these are
systems able to train themselves (to store their personal experience). This unique feature enables
AI to act differently in the same situations, depending on the actions performed before, which is
very similar to human experience. Cognitive modeling and rational thinking techniques give
more flexibility and allow for creating programs that can “understand,” i.e. that have traits of a
reasonable person (brain activity processes).

As AI technologies, products, systems, and autonomous machines continue to develop and gain
acceptance, legal claims related to these technologies will also rise. While courts, legislatures,
and regulatory agencies have begun to address the novel legal issues presented, the current legal
framework leaves several areas open for significant development. Parties filing and defending
actions related to AI technology will need to advance creative concepts for addressing issues
such as causation and liability, which will surely be at the forefront of any AI-related litigation.
And when novel AI-related issues arise with no apparent legal precedent or laws to rely upon,
let's still wait a bit longer before asking a robot for help.

The ability to accumulate experience and learn from it, as well as the ability to act independently
and make individual decisions, creates preconditions for damage. National and international law

85 Schwab, K. (2016).

does not recognize AI as a legal person, which means that AI may not be held personally liable
for damage it causes. For this reason, in the context of AI liability issues, the following principle
may be applied: the general principle in Article 12 of the United Nations Convention on the Use
of Electronic Communications in International Contracts, which states that a person (whether a
natural person or a legal entity) on whose behalf a computer was programmed should ultimately
be responsible for any message generated by the machine.

In view of the foregoing, the concept of AI-as-Tool can be applied, which means that strict
liability rules govern the behaviour of the machine, binding the natural or legal person on whose
behalf it acted, regardless of whether such conduct was planned or envisaged. The actors may be
the producers of the AI systems or machines, the users of AI, the programmers of the software
run on such machines, and their owners.

When an AI system is understood to be a tool, one can apply vicarious or strict liability for the
damage caused by AI. The vicarious liability concept comes from the respondeat superior
liability theory formed in Roman law. That theory renders the defendant liable for the torts
committed by primitive AI; liability is imposed on the person not because of his own wrongful
act, but due to his relationship with the tortfeasor AI. In the case of erratic behaviour on the part
of AI, when there is damage to a third party, the person (the AI owner or user) may claim
damages against the AI designer and/or producer (product liability). However, given AI
operating principles based on independent decision making, it would be difficult to discharge
the burden of proof in an appropriate manner. Regulatory mandates can ensure that AI systems
have built-in sub-systems that explain their decision logic for interrogative purposes.

Strict liability arising out of the actions of AI can be applied where AI is treated as an ultra-
hazardous activity. In a product liability case, the plaintiff would find it very difficult to prove
that the AI product was defective, and especially that the defect existed when the AI left its
manufacturer or developer. AI is a self-learning system, so it can be impossible to draw the line
between damage resulting from the will of AI in the process of its (self-)learning and a product
defect. AI can be treated as a greater source of danger, and the person on whose behalf it acts as
manager could be held liable without fault. Thus it would be useful to employ the “deep pocket”
theory, which means that a person engaged in dangerous activities that are profitable and useful
to society should compensate for damage caused to society from the profit gained. A person
with a “deep pocket,”

whether that is the producer or programmer, is required to insure against civil liability as a
guarantee for their hazardous activities. Additionally, the Common Enterprise Doctrine, adapted
to a new strict liability regime, can be used in this case.

Sector-specific regulations, guidelines and standardization, through the collective and
collaborative effort of public and private stakeholders across geographical boundaries, are a
must. Opportunity is boundless in unregulated areas, but the law's regulations certainly “bound”
the opportunity. Any regulatory step should carefully weigh the benefits against the possible
impact on innovative pursuits.

Further research areas:

1. The analysis in this paper has focused mainly on liability issues. Possible solutions and
   recommendations for other issues, for example IPR issues, can be researched further.
2. The paper focused on regulatory efforts and case laws predominantly in the western world,
   mainly Europe and the USA. Further exploration of regulations in other countries can
   also give new insights.
3. Indian brainpower drives AI development around the world. It is important to identify the
   changes required in laws and regulations to keep AI development activities within bounds,
   so that public and state interests are not compromised in the process.

Bibliography
Buyers, J. (2015, January). Liability Issues in Autonomous and Semi Autonomous Systems. Retrieved from
osborneclarke.com: http://www.osborneclarke.com/media/filer_public/c9/73/c973bc5c-cef0-4e45-8554-f6f90f396256/itech_law.pdf

Croft, J. (2016, October 6). Artificial intelligence disrupting the business of law. Retrieved from
www.ft.com: https://www.ft.com/content/5d96dd72-83eb-11e6-8897-2359a58ac7a5

de Souza, S. P. (2017, November 16). Transforming the Legal Profession: the Impact and Challenges of
Artificial Intelligence. Retrieved from www.digitalpolicy.org:
http://www.digitalpolicy.org/transforming-legal-profession-impact-challenges-artificial-intelligence/

Quinn Emanuel. (2016, December). Artificial Intelligence Litigation: Can the Law Keep Pace with The
Rise of the Machines? Retrieved from www.quinnemanuel.com:
https://www.quinnemanuel.com/the-firm/news-events/article-december-2016-artificial-intelligence-litigation-can-the-law-keep-pace-with-the-rise-of-the-machines/

European Commission. (2016, April 30). A European strategy on Cooperative Intelligent Transport
Systems, a milestone towards cooperative, connected and automated mobility. Retrieved from
European Commission.

Friedman, D. (n.d.). Artificial Intelligence: Legal Research. Retrieved from
http://www.daviddfriedman.com/Academic/Course_Pages/21st_century_issues/21st_century_law/ArtificialIntelligence_Cannon_12.htm

Hallevy, G. (2010). The Criminal Liability of Artificial Intelligence Entities – From Science Fiction to Legal
Social Control. Akron Intellectual Property Journal, Vol. 4, Iss. 2, Article 1.
http://ideaexchange.uakron.edu/akronintellectualproperty/vol4/iss2/1/

Heyman, M. G. (n.d.). The Natural and Probable Consequences Doctrine: A Case Study in Failed Law
Reform. Berkeley Journal of Criminal Law, Vol. 15, Issue 2.

Holley, P. (2016, January 20). Why Stephen Hawking believes the next 100 years may be humanity's
toughest test yet. The Washington Post. Retrieved from
https://www.washingtonpost.com/news/speaking-of-science/wp/2016/01/20/why-stephen-hawking-believes-the-next-100-years-may-be-humanitys-toughest-test-yet/?noredirect=on&utm_term=.f8be9c411acb

Karnow, C. E. (1996). Liability for Distributed Artificial Intelligences. Berkeley Technology Law Journal,
Vol. 11(1), 147.

Khaleej Times. (2018, April 3). Hollywood star Will Smith 'rejected' by robot Sophia. Retrieved from
Khaleej Times: https://www.khaleejtimes.com/region/saudi-arabia/Hollywood-star-Will-Smith-rejected-by-robot-Sophia

Kharpal, A. (2015, April 21). Robot with $100 bitcoin buys drugs, gets arrested. Retrieved from CNBC:
https://www.cnbc.com/2015/04/21/robot-with-100-bitcoin-buys-drugs-gets-arrested.html

Kingsman, M. (2017, January 30). Artificial Intelligence: Legal, ethical and policy issues. Retrieved from
ZDNet: https://www.zdnet.com/article/artificial-intelligence-legal-ethical-and-policy-issues/

MIT Technology Review. (2017, November). AI Can Be Made Legally Accountable for Its Decisions.
Retrieved from www.technologyreview.com: https://www.technologyreview.com/s/609495/ai-can-be-made-legally-accountable-for-its-decisions/

Norton Rose Fulbright. (n.d.). What is Artificial Intelligence? Retrieved from https://www.aitech.law:
https://www.aitech.law/publications/what-is-ai

Palmerini, E. (2010). The interplay between law and technology, or the RoboLaw project. In Law and
Technology: The Challenge of Regulating Technological Development. Pisa: Pisa University Press.

Čerka, P., Grigienė, J., & Sirbikytė, G. (2015). Liability for damages caused by artificial intelligence.
Computer Law & Security Review, 31, 376-389.

Queen's Law debate. (n.d.). How will artificial intelligence affect the legal profession in the next decade?
Retrieved from https://law.queensu.ca: https://law.queensu.ca/how-will-artificial-intelligence-affect-legal-profession-next-decade

Schatsky, D., Muraskin, C., & Gurumurthy, R. (2014, November 4). Demystifying artificial intelligence:
What business leaders need to know about cognitive technologies. Retrieved from
www2.deloitte.com: https://www2.deloitte.com/insights/us/en/focus/cognitive-technologies/what-is-cognitive-technology.html

Schwab, K. (2016). The Fourth Industrial Revolution. Geneva: World Economic Forum.

Sobowale, J. (2016, April). How artificial intelligence is transforming the legal profession. Retrieved from
ABA Journal: http://www.abajournal.com/magazine/article/how_artificial_intelligence_is_transforming_the_legal_profession?icn=most_read

