Assignment#1
Submitted To:
Submitted By:
Aamir Sohail
Reg No:
15-CE-019
Dated: 20-FEB-2019
HISTORY OF ARTIFICIAL INTELLIGENCE:
The history of Artificial Intelligence (AI) began in antiquity, with myths, stories and rumors of
artificial beings endowed with intelligence or consciousness by master craftsmen. The seeds of
modern AI were planted by classical philosophers who attempted to describe the process of
human thinking as the mechanical manipulation of symbols. This work culminated in
the invention of the programmable digital computer in the 1940s, a machine based on the
abstract essence of mathematical reasoning. This device and the ideas behind it inspired a
handful of scientists to begin seriously discussing the possibility of building an electronic brain.
The field of AI research was founded at a workshop held on the campus of Dartmouth
College during the summer of 1956. Those who attended would become the leaders of AI
research for decades. Many of them predicted that a machine as intelligent as a human being
would exist in no more than a generation and they were given millions of dollars to make this
vision come true.
Eventually, it became obvious that they had grossly underestimated the difficulty of the project.
In 1973, in response to the criticism from James Lighthill and ongoing pressure from Congress,
the U.S. and British Governments stopped funding undirected research into artificial intelligence,
and the difficult years that followed would later be known as an "AI winter". Seven years later, a
visionary initiative by the Japanese Government inspired governments and industry to provide AI
with billions of dollars, but by the late 80s the investors became disillusioned by the absence of
the needed computer power (hardware) and withdrew funding again.
Investment and interest in AI boomed in the first decades of the 21st century, when machine
learning was successfully applied to many problems in academia and industry due to the
presence of powerful computer hardware.
Formal reasoning
Artificial intelligence is based on the assumption that the process of human thought can be
mechanized. The study of mechanical—or "formal"—reasoning has a long
history. Chinese, Indian and Greek philosophers all developed structured methods of formal
deduction in the first millennium BCE. Their ideas were developed over the centuries by
philosophers such as Aristotle (who gave a formal analysis of
the syllogism), Euclid (whose Elements was a model of formal reasoning), al-Khwārizmī (who
developed algebra and gave his name to "algorithm") and European scholastic philosophers such
as William of Ockham and Duns Scotus. In the 17th century, Leibniz, Thomas Hobbes and René
Descartes explored the possibility that all rational thought could be made as systematic as
algebra or geometry. Hobbes famously wrote in Leviathan: "reason is nothing but
reckoning". Leibniz envisioned a universal language of reasoning (his characteristica
universalis) which would reduce argumentation to calculation, so that "there would be no more
need of disputation between two philosophers than between two accountants. For it would
suffice to take their pencils in hand, sit down to their slates, and to say to each other (with a friend as
witness, if they liked): Let us calculate." These philosophers had begun to articulate
the physical symbol system hypothesis that would become the guiding faith of AI research.
Game AI
In 1951, using the Ferranti Mark 1 machine of the University of Manchester, Christopher
Strachey wrote a checkers program and Dietrich Prinz wrote one for chess. Arthur Samuel's
checkers program, developed in the mid-1950s and early 1960s, eventually achieved sufficient skill to
challenge a respectable amateur. Game AI would continue to be used as a measure of progress in AI
throughout its history.
Reasoning as search
Many early AI programs used the same basic algorithm. To achieve some goal (like winning a
game or proving a theorem), they proceeded step by step towards it (by making a move or a
deduction) as if searching through a maze, backtracking whenever they reached a dead end. This
paradigm was called "reasoning as search".
The principal difficulty was that, for many problems, the number of possible paths through the
"maze" was simply astronomical (a situation known as a "combinatorial explosion").
Researchers would reduce the search space by using heuristics or "rules of thumb" that would
eliminate paths that were unlikely to lead to a solution.
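The step-by-step search with backtracking described above can be sketched in a few lines of Python. The toy "maze" graph here is a hypothetical example, not any historical program:

```python
def solve(state, goal, neighbors, visited=None):
    """Depth-first 'reasoning as search': try each possible move in turn,
    and backtrack (return None) whenever a dead end is reached."""
    if visited is None:
        visited = set()
    if state == goal:
        return [state]
    visited.add(state)
    for nxt in neighbors(state):
        if nxt in visited:
            continue  # already explored this branch
        path = solve(nxt, goal, neighbors, visited)
        if path is not None:
            return [state] + path
    return None  # dead end: backtrack to the caller

# A toy maze as an adjacency map (hypothetical example graph).
maze = {"A": ["B", "C"], "B": ["D"], "C": ["E"], "D": [], "E": ["G"], "G": []}
print(solve("A", "G", lambda s: maze[s]))  # ['A', 'C', 'E', 'G']
```

A heuristic would slot in as an extra filter on `neighbors(state)`, discarding moves that look unpromising before they are ever explored; without it, the number of paths explodes combinatorially.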
Newell and Simon tried to capture a general version of this algorithm in a program called the
"General Problem Solver". Other "searching" programs were able to accomplish impressive tasks
like solving problems in geometry and algebra, such as Herbert Gelernter's Geometry Theorem
Prover (1958) and SAINT, written by Minsky's student James Slagle (1961). Other programs
searched through goals and subgoals to plan actions, like the STRIPS system developed
at SRI to control the behavior of their robot Shakey.
The second AI winter
The second AI winter came in the late 80s and early 90s after a series of financial setbacks. The
collapse of expert systems and of specialized AI hardware companies, undercut by desktop
computers from Apple and IBM, again reduced interest in AI, both in the press and among
investors.
Deep Blue
After many ups and downs, Deep Blue became the first chess computer to beat a reigning world
chess champion, Garry Kasparov. On 11 May 1997, IBM's chess computer defeated Kasparov
in a six-game match, 3½–2½.
Deep Blue used tree search, calculating up to 20 or even more moves ahead in some lines. It
evaluated positions with a value function that was mainly written by hand and later tuned by
analyzing thousands of games. Deep Blue also contained an opening and endgame library of many
grandmaster games.
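The core idea of fixed-depth game-tree search with a hand-written evaluation function can be sketched as a minimax recursion. The toy tree and value function below are hypothetical; Deep Blue's actual search and evaluation were vastly more sophisticated:

```python
def minimax(pos, depth, maximizing, children, value):
    """Search the game tree to a fixed depth; score frontier and leaf
    positions with a (hand-written) evaluation function `value`."""
    kids = children(pos)
    if depth == 0 or not kids:
        return value(pos)  # frontier reached: fall back to the evaluation
    if maximizing:
        return max(minimax(k, depth - 1, False, children, value) for k in kids)
    return min(minimax(k, depth - 1, True, children, value) for k in kids)

# Hypothetical toy game tree: internal nodes are strings, leaves carry scores.
tree = {"root": ["a", "b"], "a": [3, 5], "b": [2, 9]}
children = lambda p: tree.get(p, []) if isinstance(p, str) else []
value = lambda p: p if isinstance(p, int) else 0

print(minimax("root", 2, True, children, value))  # 3
```

The maximizing player picks branch "a" because the opponent, minimizing, would hold it to 3, whereas branch "b" could be held to 2. Deepening the search and sharpening `value` is exactly where the hand-tuning and game analysis effort went.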
In 1997, Deep Blue was the 259th most powerful supercomputer, with 11.38 GFLOPS. For
comparison: the most powerful supercomputer in 1997 had 1,068 GFLOPS, and as of
December 2017 the most powerful supercomputer delivers about 93,015 TFLOPS.
FLOPS stands for floating-point operations per second, and the 'G' in GFLOPS stands for giga,
so 1 GFLOPS is 10⁹ FLOPS.
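As a quick sanity check of these figures (using only the 11.38 GFLOPS and 1,068 GFLOPS numbers quoted above):

```python
GIGA = 10**9  # 1 GFLOPS = 10^9 floating-point operations per second

deep_blue_flops = 11.38 * GIGA    # Deep Blue, 1997
top500_1997_flops = 1_068 * GIGA  # fastest supercomputer in 1997

# Even in 1997, the fastest supercomputer was roughly 94x faster than Deep Blue.
print(round(top500_1997_flops / deep_blue_flops))  # 94
```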
21st Century: Deep learning, Big Data and Artificial General Intelligence
In the last two decades, artificial intelligence has grown rapidly. The AI market (hardware and
software) reached $8 billion in 2017, and the research firm IDC (International Data
Corporation) predicts that the market will reach $47 billion by 2020.
All of this has been made possible by big data, faster computers and advances in machine
learning techniques in recent years.
With neural networks, complicated tasks like video processing, text analysis and speech
recognition can now be tackled, and existing solutions will continue to improve in the coming
years.
APPLICATIONS:
Finance: Banks use artificial intelligence systems to organize operations, invest in stocks, and
manage properties. In August 2001, robots beat humans in a simulated financial trading
competition.
Medicine: A medical clinic can use AI systems to organize bed schedules, make a staff rotation,
and provide medical information. AI also has applications in cardiology (CRG), neurology
(MRI), embryology (sonography), complex operations on internal organs, etc.
Social Sites: Facebook wants to analyze the way people communicate with one another so that
it can add new features to its services.
Robotics: Robots are physical agents that perform tasks by manipulating the physical world.
Robots are manufactured as hardware. The control of a robot is AI (in the form of a software
agent) that reads data from the sensors, decides what to do next, and then directs the effectors
to act in the physical world.
Heavy Industries: Huge machines involve risk in their manual maintenance and operation, so
it becomes necessary to have an automated AI agent involved in their operation.
Education: AI researchers have created many tools to solve the most difficult problems in
computer science.