
Artificial Intelligence will make humans an endangered species
(Science fact, not fiction)

Living brain cells inside robotic brains are now successfully learning and making their own decisions (http://www.bbc.co.uk/news/science-environment-22867070). We are creating artificial intelligence (AI: robots that make their own decisions) which will supersede us and make human extinction possible. We can see the beginnings of human-versus-robot law being documented as governments pre-empt the AI explosion, a UN spokesperson is publicly calling for debates on AI, and human rights groups are concerned that the matter will soon be out of human hands. Whilst this discussion is based on easily available and reputable online sources, many people are not aware or not interested, dismissing AI as sci-fi. However, many academics, scientists, researchers and roboticists are taking it seriously. For instance, Cambridge University in England has set up a Centre for the Study of Existential Risk in regard to AI. The homepage reads:
Many scientists are concerned that
developments in human technology may soon
pose new, extinction-level risks to our species as
a whole. Such dangers have been suggested from
progress in AI, from developments in
biotechnology and artificial life, from
nanotechnology, and from possible extreme
effects of anthropogenic climate change. The
seriousness of these risks is difficult to assess, but
that in itself seems a cause for concern, given
how much is at stake. http://cser.org/
Whilst more people are becoming aware that using brain signals, read via EEG (electroencephalography), to manoeuvre model helicopters and wheelchairs is possible, logically large aeroplanes and tanks could be moved in the same way. The AI introduced to us through science fiction like Terminator draws ever closer, yet it seems to me much of the general population is not concerned, perhaps disregarding it as pure fantasy, or perhaps accepting that progress which cannot currently be conceived will evolve anyway; either way, few are thinking about the effects. "We need to take seriously the possibility that there might be a Pandora's box," says Huw Price, the Bertrand Russell Professor of Philosophy and one of CSER's three founders. See more at: http://www.cam.ac.uk/research/news/humanitys-last-invention-and-our-uncertain-future#sthash.cEpwMmiX.dpuf.
The "robo sapien" (as dubbed by the New York Times, http://www.nytimes.com/2013/07/12/science/modest-debut-of-atlas-may-foreshadow-age-of-robo-sapiens.html?ref=defenseadvancedresearchprojectsagency&_r=0), aka the Terminator, has been unveiled by DARPA (the Defense Advanced Research Projects Agency) to help assist in natural disasters or nuclear power plant rescues. Some have commented that it looks like a prototype infantryman (http://news.cnet.com/8301-17938_105-57593396-1/be-afraid-darpa-unveils-terminator-like-atlas-robot/), and one can see that DARPA was established for maintaining the technological superiority of the U.S. military (http://www.darpa.mil/Our_Work/). AI in the military already exists, but I wonder how advanced the technology really is.
It's strange that the Romo robot, which uses your iPhone as its brain, has more comments and shares online than the articles discussing robots which are going to completely change the world. Why isn't the rapid increase in AI spread all over our screens and the front pages of our newspapers? Perhaps there's not enough interest? The Machine Intelligence Research Institute (MIRI) also warns that an uncontrolled explosion in AI could lead to human extinction. It seems to me that on its current course it will be rather like human evolution: apes, after all, watched us make fire and become sophisticated communicators, leaving them behind. We are at the dawn of creating robots or humanoids capable of succeeding us, and the impact will be felt on many levels: economically, socially, culturally and beyond. Although it may seem like fantasy, many researchers and academics are trying to make provisions and look at risks so that there is a controlled AI explosion. MIRI started on the University of California, Berkeley campus; it consists of a team who have gained qualifications or been employed in renowned environments such as Harvard University, Oxford University and Google research (http://intelligence.org/team/).
Robots taking the place of human employment: even nurses, primary care doctors and lawyers
Soon robots will be the preferred workforce: why rent human labour when you can own robots? You won't wake up tomorrow and find the streets filled with independently thinking robots, but they are being introduced already, and these high-tech innovations will be viewed as rather rudimentary someday. They won't just take over a single sector, as happened with farming in the industrial revolution or with the assemblers in the automobile industry. Your job won't be safe even if you are a lawyer or journalist, and within a predicted 50 years even a primary care doctor's job won't be safe. It won't take years of study for a robot to do a better job, and robots won't take sick days or need regular breaks either. They will be cheap, highly skilled and extremely productive. Capitalist economies have started to find new innovations that will inevitably lead to dramatic change. AI, most likely, will go beyond our control; its developmental direction and outcome depend on market demands.
Robots are already being used as museum guides and in shopping malls (http://www.bbc.co.uk/news/technology-21847612), and as pharmacists and nurses (http://www.telegraph.co.uk/health/healthnews/6448373/Robot-nurses-to-man-wards-in-development.html). In South Korea they are producing childcare robots, but is there a possibility that they will hug your child too tightly? Will this create developmental disorders? That's why there is so much scepticism about robots working with children, as well as with elderly people who have disabilities. Manufacturers are making at-home robots similar in stature and gestures to children because psychologically they are less threatening. South Korea has already tried out robot prison guards, and three years ago launched a plan to deploy more than 8,000 robot English-language teachers in kindergartens (http://www.bbc.co.uk/news/magazine-21623892). Projects like the one at Bristol Robotics Laboratory are working hard to make robots trustworthy. Many researchers have growing concerns that once robots make it out of the lab into mass production and their engineering is common knowledge, corners may be cut (http://www.brl.ac.uk/news/robosafetra.aspx).
The New America Foundation's podcast "Will robots steal your job?" (http://youtu.be/KOoHRircmhM) discusses emerging advances used by businesses and how computers are not just beating the very best human chess players. The systems now used in science and cutting-edge technologies are producing data we cannot understand because we are fundamentally limited; these systems will continue to use algorithms they find useful and will soon break away from us. Technology is already superseding us in new ways.

Human devolution and extinction: will AI break away from us, leaving humans behind as we did with apes?
What happens when the economy is run by machines? In the not-so-distant future, when human jobs are lost in every, or almost every, sector, how will money be earned? In turn, how will we support ourselves and our families? Will there be any healthcare? Will this mean we become an endangered species? "Think how it might be to compete for resources with the dominant species," says Huw Price of CSER. "Take gorillas for example: the reason they are going extinct is not because humans are actively hostile towards them, but because we control the environments in ways that suit us, but are detrimental to their survival." (http://www.cam.ac.uk/research/news/humanitys-last-invention-and-our-uncertain-future#sthash.cEpwMmiX.lcyJDfLj.dpuf) Like gorillas, we won't be able to understand a large part of robotic communications and transfers of data.

At some point, this century or next, we may well be facing one of the major shifts in human history, when intelligence escapes the constraints of biology... Nature didn't anticipate us, and we in our turn shouldn't take AGI (Artificial General Intelligence) for granted.
The critical point might come if computers reach human capacity to write computer programs and develop their own technologies. This, [Irving John "Jack"] Good's "intelligence explosion", might be the point we are left behind - permanently - to a future-defining AGI. - See more at: http://www.cam.ac.uk/research/news/humanitys-last-invention-and-our-uncertain-future#sthash.cEpwMmiX.dpuf

Are governments pre-empting the AI explosion? The emergence of human versus robot law
Questions and issues are beginning to be raised. United Nations human rights special rapporteur Christof Heyns warns that Lethal Autonomous Robotics (LARs) should not have the right or ability to decide whether they should kill humans (http://youtu.be/LEInsrT8cHU). It surprises me that an interesting article written by Dr Tony Hirst on 11 June 2013, "Naughty robot: where's your human operator?" (http://www.open.edu/openlearn/science-maths-technology/engineering-and-technology/technology/naughty-robot-wheres-your-human-operator), has no comments at all over a month later. Dr Hirst, writing for the Open University, asks fundamental questions like: in years to come, who should be held responsible for a robot committing a murder? If robots have responsibilities, should they also have rights? He questions whether we are seeing the first signs of robot law (that is, law for self-regulating and independent decision-making entities) versus human law.
Driverless cars are taking to the roads, and although fully autonomous ones will be too expensive for most right now, auto-pilot ones are expected to become commonplace. Different US states are regulating these cars in different ways. Florida and California consider the operator to be the one who engages the autonomous vehicle into autopilot, even when a person is not present. The US is also considering laws around autonomous unmanned vehicles, specifically drones. Whilst a Tennessee Senate bill is specifically looking at surveillance issues, other states are particularly looking at restricting drones from carrying weapons, because you can kill someone remotely by using them. Considering the laws on autonomous cars, where a human operator is deemed to hold responsibility, Dr Hirst asks: who is responsible when LARs make their own lethal-force decisions?
In the past responsibility has been considered to lie with the chain of command; however:

Traditional command responsibility is only implicated when the
commander knew or should have known that the individual
planned to commit a crime yet he or she failed to take action to
prevent it or did not punish the perpetrator after the fact. It will be
important to establish, inter alia, whether military commanders will
be in a position to understand the complex programming of LARs
sufficiently well to warrant criminal liability.
Dr Hirst also highlights human rights campaign groups that warn:
There is clearly a strong case for approaching the possible introduction of LARs with great caution. If used, they could have far-reaching effects on societal values ... there is widespread concern that allowing LARs to kill people may denigrate the value of life itself. ... If left too long to its own devices, the matter will, quite literally, be taken out of human hands. ...
http://www.hrw.org/sites/default/files/reports/arms1112ForUpload_0_0.pdf

What can be done to control the AI explosion?
Technology has aided humans in scientific discovery, prolonged life and helped us drive our economies forward. At this point, though, we should be considering the risks that rapidly growing AI exposes us to. Why was there so much concern over cloning a sheep (which dominated headlines), and yet there's no fierce debate taking place at all levels of society on this subject? Despite billions being invested in projects, it seems there is a lack of discussion. Even though in the future comparing robot brains to human brains could be likened to comparing human brains to rat brains, right now we have the advantage of being the dominant species, with the ability to discuss, debate and prepare. It is in our nature to fight for survival, after all. Currently, living brain cells in robotic brains are merely learning motor skills, rather like an infant, but they are autonomous and continue to teach themselves. Other robots have the ability to learn, improve and store a vast amount of information, so much so that they will break away from us. We have created the building blocks for an AI explosion. There is debate over the amount of time it will take before we are entirely superseded, and like any theory the estimates are not definite. Some researchers predict 16 years, others 30 to 100 or more; the truth is we cannot be definite, but it is likely to happen in the near future.
On a brighter note, there is some hope that robots will eventually be able to empathise with humans, and the first steps are being taken by roboticists such as David Hanson. The video below showcases how realistic-looking robots are currently being built: they understand speech, recognise and perceive people, and show emotion (http://www.ted.com/talks/david_hanson_robots_that_relate_to_you.html). The hope is that when AI matches human intelligence, programmed empathy will be fundamental to it.
