ENGINEERING
In 2014 SpaceX CEO Elon Musk tweeted: "Worth reading Superintelligence by Bostrom. We need to be super careful with AI. Potentially more dangerous than nukes." That same year University of Cambridge cosmologist Stephen Hawking told the BBC: "The development of full artificial intelligence could spell the end of the human race." Microsoft co-founder Bill Gates also cautioned: "I am in the camp that is concerned about super intelligence."
Artificial Intelligence Is Not a Threat—Yet | Scientific American | 2/23/2017
https://www.scientificamerican.com/article/artificialintelligenceisnotathreatmdashyet/
How the AI apocalypse might unfold was outlined by computer scientist Eliezer Yudkowsky in a paper in the 2008 book Global Catastrophic Risks: "How likely is it that AI will cross the entire vast gap from amoeba to village idiot, and then stop at the level of human genius?" His answer: "It would be physically possible to build a brain that computed a million times as fast as a human brain.... If a human mind were thus accelerated, a subjective year of thinking would be accomplished for every 31 physical seconds in the outside world, and a millennium would fly by in eight and a half hours." Yudkowsky thinks that if we don't get on top of this now it will be too late: "The AI runs on a different timescale than you do; by the time your neurons finish thinking the words 'I should do something' you have already lost."
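The time-scale arithmetic in Yudkowsky's quote is easy to verify. A minimal sketch (the million-fold speedup figure comes from the quote; the variable names and the 365.25-day year are my assumptions):

```python
# Check the arithmetic behind Yudkowsky's quote: a mind running a
# million times faster than a human brain experiences subjective
# time compressed by the same factor.
SPEEDUP = 1_000_000  # hypothetical speed advantage, from the quote

SECONDS_PER_YEAR = 365.25 * 24 * 3600  # about 31.6 million seconds

# Physical seconds that elapse while the fast mind thinks for one
# subjective year:
seconds_per_subjective_year = SECONDS_PER_YEAR / SPEEDUP
print(round(seconds_per_subjective_year, 1))  # prints 31.6

# Physical hours that elapse during a subjective millennium:
hours_per_millennium = 1000 * seconds_per_subjective_year / 3600
print(round(hours_per_millennium, 1))  # prints 8.8
```

The results, roughly 31.6 seconds per subjective year and 8.8 hours per subjective millennium, match the quote's rounded figures of "31 physical seconds" and "eight and a half hours."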
The paradigmatic example is University of Oxford philosopher Nick Bostrom's thought experiment of the so-called paperclip maximizer presented in his Superintelligence book: An AI is designed to make paperclips, and after running through its initial supply of raw materials, it utilizes any available atoms that happen to be within its reach, including humans. As he described in a 2003 paper, from there it "starts transforming first all of earth and then increasing portions of space into paperclip manufacturing facilities." Before long, the entire universe is made up of paperclips and paperclip makers.
I'm skeptical. First, all such doomsday scenarios involve a long sequence of if-then contingencies, a failure of which at any point would negate the apocalypse. University of the West of England, Bristol, professor of electrical engineering Alan Winfield put it this way in a 2014 article: "If we succeed in building human equivalent AI and if that AI acquires a full understanding of how it works, and if it then succeeds in improving itself to produce super-intelligent AI, and if that super-AI, accidentally or maliciously, starts to consume resources, and if we fail to pull the plug, then, yes, we may well have a problem. The risk, while not impossible, is improbable."
Second, the development of AI has been much slower than predicted, allowing time to build in checks at each stage. As Google executive chairman Eric Schmidt said in response to Musk and Hawking: "Don't you think humans would notice this happening? And don't you think humans would then go about turning these computers off?" Google's own DeepMind has developed the concept of an AI off switch, playfully described as a "big red button" to be pushed in the event of an attempted AI takeover. As Baidu vice president Andrew Ng put it (in a jab at Musk), it would be like "worrying about overpopulation on Mars when we have not even set foot on the planet yet."
Third, AI doomsday scenarios are often predicated on a false analogy between natural intelligence and artificial intelligence. As Harvard University experimental psychologist Steven Pinker elucidated in his answer to the 2015 Edge.org Annual Question "What Do You Think about Machines That Think?": "AI dystopias project a parochial alpha-male psychology onto the concept of intelligence. They assume that superhumanly intelligent robots would develop goals like deposing their masters or taking over the world." It is equally possible, Pinker suggests, that artificial intelligence will "naturally develop along female lines: fully capable of solving problems, but with no desire to annihilate innocents or dominate the civilization."
Fourth, the implication that computers will "want" to do something (like convert the world into paperclips) means AI has emotions, but as science writer Michael Chorost notes, "the minute an A.I. wants anything, it will live in a universe with rewards and punishments, including punishments from us for behaving badly."

Given the zero percent historical success rate of apocalyptic predictions, coupled with the incrementally gradual development of AI over the decades, we have plenty of time to build in failsafe systems to prevent any such AI apocalypse.
This article was originally published with the title "Apocalypse AI."
ABOUT THE AUTHOR(S)
Michael Shermer