
Matt Bulen

9-24-16
Ted Talks 2:
Title: What happens when our computers get smarter than we are?
Presenter: Nick Bostrom
Summary
In this TED Talk, Nick Bostrom discusses the potential risks of artificial superintelligence.
Artificial intelligence (AI) has made great leaps in the twenty-first century. Bostrom estimates
that there is a 50% chance AI will be created by the year 2050, and a 90% chance by 2090.
AI is code that learns and builds on itself, much like a human's learning capability,
just at a faster rate. Right now we have algorithms that are capable of learning, but only in a
controlled environment. For example, someone wrote an AI program to play the video game
Super Mario. The AI learned and improved as the game went on, and ended up mastering the
game in the end. The only restriction is that it was written for that specific game. When AI gets
to a point where it can live and learn in the same reality as us, extra precautions need to be
taken in order to stay safe. The gist of his speech was that AI will eventually learn at an
extremely fast rate, and when we give it instructions, it will learn that humans might get in the
way of it completing its task. If we try to control it, it will learn to free itself.
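The kind of restricted learner described above can be sketched as a tiny tabular Q-learning loop. Everything here is an invented toy for illustration (the 1-D "level", the states, actions, and rewards are all assumptions), not the actual Mario-playing program from the talk; the point is that the learner only works inside the one environment it was written for:

```python
import random

# Toy sketch (assumption, not from the talk): tabular Q-learning on a
# tiny 1-D "level" where the agent must walk right from position 0 to
# the goal. Like the Mario example, this learner is hard-wired to this
# one environment and cannot learn anything outside it.

N_STATES = 5          # positions 0..4; reaching position 4 ends the episode
ACTIONS = [-1, +1]    # step left or step right

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    # Q-table: expected discounted reward for each (state, action) pair
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        while s != N_STATES - 1:
            # epsilon-greedy: mostly exploit what was learned, sometimes explore
            if rng.random() < epsilon:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda a: q[(s, a)])
            s2 = min(max(s + a, 0), N_STATES - 1)   # walls clamp movement
            r = 1.0 if s2 == N_STATES - 1 else 0.0  # reward only at the goal
            # standard Q-learning update
            q[(s, a)] += alpha * (r + gamma * max(q[(s2, b)] for b in ACTIONS)
                                  - q[(s, a)])
            s = s2
    return q

q = train()
# After training, the greedy policy in every non-goal state is "move right".
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)
```

The code "masters" its little level the same way the Mario AI did, by trial and error with a reward signal, but swap in any other game and it knows nothing.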
The only way to make this safe would be to create an AI that learns what humans value,
and leverage that intelligence on our side. So as a society, we need to make sure that when
someone finally creates a working AI, this second step is carried out.
Reflection

-This topic is something that has always been very interesting to me. I know that we are very far
from a working AI; it may not even be possible. But if it happens, the relationship between
human and computer will look much like that between man and ape. It will learn at a faster pace
and have a higher capacity for knowledge, leaving us behind. If we create something that is
smarter than us, we will not be able to control it, which is a very scary thought.