
ARTIFICIAL INTELLIGENCE

DEFINITION

Artificial Intelligence (AI) is the area of computer
science focused on creating machines that can engage
in behaviors that humans consider intelligent. The
ability to create intelligent machines has intrigued
humans since ancient times, and today, with the advent
of the computer and more than fifty years of research into
AI programming techniques, the dream of smart machines
is becoming a reality. Researchers are creating systems
that can mimic human thought, understand speech,
beat the best human chess players, and perform countless
other feats never before possible. The military is already
applying AI logic to its high-tech systems, and in the
near future Artificial Intelligence may affect our everyday lives.
INTRODUCTION

Artificial Intelligence, or AI for short, is a
combination of computer science, physiology, and
philosophy. AI is a broad topic, consisting of different
fields, from machine vision to expert systems. The element
that the fields of AI have in common is the creation of
machines that can "think".

Artificial Intelligence has come a long way from its
early roots, driven by dedicated researchers. The
beginnings of AI reach back before electronics, to
philosophers and mathematicians such as Boole and others
theorizing on principles that were later used as the
foundation of AI logic. AI really began to intrigue
researchers with the invention of the computer in 1943.
The technology finally seemed available to simulate
intelligent behavior. Over the next four decades, despite
many stumbling blocks, AI grew from a dozen researchers
to thousands of engineers and specialists, and from programs
capable of playing checkers to systems designed to
diagnose disease.

AI has always been on the pioneering end of computer
science. Advanced computer languages, as well as
computer interfaces and word processors, owe their
existence to research into artificial intelligence. The
theory and insights brought about by AI research will set
the trend in the future of computing. The products available
today are only bits and pieces of what is soon to follow,
but they are a movement toward the future of artificial
intelligence. The advances in the quest for artificial
intelligence have affected, and will continue to affect,
our jobs, our education, and our lives.
HISTORY:

In 1956, John McCarthy, regarded as the father of AI,
organized a conference to draw on the talent and expertise
of others interested in machine intelligence for a month of
brainstorming. He invited them to Dartmouth College in New
Hampshire for "The Dartmouth Summer Research Project on
Artificial Intelligence." From that point on, because of
McCarthy, the field would be known as Artificial
Intelligence. Although not a huge success, the Dartmouth
conference did bring together the founders of AI, and it
served to lay the groundwork for the future of AI research.
In late 1955, Newell and Simon developed the Logic
Theorist, considered by many to be the first AI program.
The program represented each problem as a tree and
attempted to solve it by selecting the branch most likely
to lead to the correct conclusion. The impact that the
Logic Theorist made on both the public and the field of AI
has made it a crucial stepping stone in the development of
the field.
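The branch-selection idea described above can be sketched as a small best-first tree search. This is only an illustration of the general technique, not the Logic Theorist itself; the toy problem and the heuristic score are invented for the example.

```python
# A minimal sketch of heuristic tree search: always expand the branch
# whose score suggests it is most likely to reach the goal.
# The toy problem and scoring function below are invented examples.

import heapq

def best_first_search(start, goal, successors, score):
    """Expand the most promising node first (lower score = more promising)."""
    frontier = [(score(start), start, [start])]
    seen = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt in successors(node):
            if nxt not in seen:
                heapq.heappush(frontier, (score(nxt), nxt, path + [nxt]))
    return None

# Toy problem: reach 10 from 1 by doubling or adding 1;
# the heuristic is simply the distance to the goal.
path = best_first_search(
    start=1, goal=10,
    successors=lambda n: [n * 2, n + 1],
    score=lambda n: abs(10 - n),
)
print(path)
```

The search keeps a priority queue of partial paths and always extends the one whose endpoint scores best, which is the essence of choosing "the branch most likely to lead to the correct conclusion."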
Although the computer provided the technology
necessary for AI, it was not until the early 1950s that the
link between human intelligence and machines was really
observed. Norbert Wiener was one of the first Americans to
make observations on the principle of feedback theory. The
most familiar example of feedback theory is the thermostat:
it controls the temperature of an environment by measuring
the actual temperature of the house, comparing it with the
desired temperature, and responding by turning the heat up
or down. What was so important about Wiener's research into
feedback loops was his theory that all intelligent behavior
is the result of feedback mechanisms, mechanisms that could
possibly be simulated by machines. This idea influenced
much of the early development of AI.
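The thermostat loop described above can be written out in a few lines. This is a deliberately simplified sketch; the tolerance and temperature values are invented for illustration.

```python
# A minimal sketch of the thermostat feedback loop: measure the actual
# temperature, compare it with the desired one, and respond by turning
# the heat up or down. All numbers here are illustrative.

def thermostat_step(actual, desired, tolerance=0.5):
    """Return the control action for one cycle of the feedback loop."""
    error = desired - actual
    if error > tolerance:
        return "heat on"
    if error < -tolerance:
        return "heat off"
    return "hold"

# One simulated afternoon: the room drifts, and the loop corrects it.
for temp in (18.0, 19.8, 21.0):
    print(temp, "->", thermostat_step(temp, desired=20.0))
```

Each cycle compares a measurement against a goal and acts on the difference, which is exactly the feedback mechanism Wiener argued underlies intelligent behavior.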
APPROACHES

There is no established unifying theory or paradigm
that guides AI research, and researchers disagree about
many issues. A few of the most long-standing unanswered
questions are these: Should artificial intelligence
simulate natural intelligence, by studying psychology or
neurology, or is human biology as irrelevant to AI research
as bird biology is to aeronautical engineering? Can
intelligent behavior be described using simple, elegant
principles (such as logic or optimization), or does it
necessarily require solving a large number of completely
unrelated problems? Can intelligence be reproduced using
high-level symbols, similar to words and ideas, or does it
require "sub-symbolic" processing?

Cybernetics and brain simulation:


There is no consensus on how closely the brain should
be simulated. In the 1940s and 1950s, a number of
researchers explored the connection between neurology,
information theory, and cybernetics. Some of them built
machines that used electronic networks to exhibit
rudimentary intelligence, such as W. Grey Walter's turtles
and the Johns Hopkins Beast. Many of these researchers
gathered for meetings of the Teleological Society at
Princeton University and the Ratio Club in England. By
1960, this approach was largely abandoned,
although elements of it would be revived in the 1980s.

Symbolic:
When access to digital computers became possible in
the middle 1950s, AI research began to explore the
possibility that human intelligence could be reduced to
symbol manipulation. The research was centered in three
institutions: CMU, Stanford, and MIT, and each one
developed its own style of research. John Haugeland
named these approaches to AI "good old-fashioned AI" or
"GOFAI".

Cognitive simulation:
Economist Herbert Simon and Allen Newell studied
human problem-solving skills and attempted to formalize
them, and their work laid the foundations of the field of
artificial intelligence, as well as cognitive science,
operations research, and management science. Their
research team used the results of psychological
experiments to develop programs that simulated the
techniques that people used to solve problems. This
tradition, centered at Carnegie Mellon University, would
eventually culminate in the development of the Soar
architecture in the mid-1980s.

Logic-based:
Unlike Newell and Simon, John McCarthy felt that
machines did not need to simulate human thought, but
should instead try to find the essence of abstract reasoning
and problem solving, regardless of whether people used the
same algorithms. His laboratory at Stanford (SAIL)
focused on using formal logic to solve a wide variety of
problems, including knowledge representation, planning,
and learning.

Anti-logic or "scruffy":
Researchers at MIT (such as Marvin Minsky and
Seymour Papert) found that solving difficult problems in
vision and natural language processing required ad-hoc
solutions; they argued that there was no simple and
general principle (like logic) that would capture all the
aspects of intelligent behavior. Roger Schank described
their "anti-logic" approaches as "scruffy" (as opposed to
the "neat" paradigms at CMU and Stanford). Commonsense
knowledge bases (such as Doug Lenat's Cyc) are an
example of "scruffy" AI, since they must be built by hand,
one complicated concept at a time.

Knowledge-based:
When computers with large memories became
available around 1970, researchers from all three traditions
began to build knowledge into AI applications. This
"knowledge revolution" led to the development and
deployment of expert systems (introduced by Edward
Feigenbaum), the first truly successful form of AI software.
The knowledge revolution was also driven by the
realization that enormous amounts of knowledge would be
required by even many simple AI applications.
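At its core, an expert system applies hand-built if-then rules to known facts until no new conclusions emerge, a process known as forward chaining. The sketch below illustrates that idea only; the rules and facts are invented examples, not taken from any real expert system.

```python
# A minimal sketch of forward chaining over a hand-built knowledge base:
# if-then rules are applied repeatedly until the set of known facts
# stops growing. The rules and facts below are invented examples.

def forward_chain(facts, rules):
    """Apply rules until no rule can add a new fact."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conclusion not in facts and all(c in facts for c in conditions):
                facts.add(conclusion)
                changed = True
    return facts

# Each rule: (conditions that must all hold, conclusion to add).
rules = [
    (("fever", "cough"), "flu-like illness"),
    (("flu-like illness", "short of breath"), "see a doctor"),
]
print(forward_chain({"fever", "cough", "short of breath"}, rules))
```

Note how the second rule only fires after the first has added its conclusion: chains of rules like this are what let expert systems reach non-obvious diagnoses from simple observations.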
APPLICATIONS

Artificial Intelligence in the form of expert systems
and neural networks has applications in every field of
human endeavor. These systems combine precision and
computational power with pure logic to solve problems
and reduce error in operation. Already, robotic expert
systems are taking over many jobs in industries that are
dangerous for humans or beyond human ability. Some of the
applications, divided by domain, are:

Heavy Industries and Space:

Robotics and cybernetics have taken a leap forward
when combined with artificially intelligent expert systems.
Entire manufacturing processes are now totally automated,
controlled, and maintained by computer systems in car
manufacturing, machine-tool production, computer-chip
production, and almost every other high-tech process.
Robots carry out dangerous tasks such as handling
hazardous radioactive materials. Robotic pilots carry out
complex maneuvers for unmanned spacecraft. Japan is the
leading country in the world in robotics research and use.

Finance:

Banks use intelligent software applications to screen
and analyze financial data. Software that can predict
trends in the stock market has been created and has
been known to beat humans in predictive power.

Computer Science:
Researchers in the quest for artificial intelligence have
created spin-offs such as dynamic programming, object-
oriented programming, symbolic programming, intelligent
storage-management systems, and many more such tools.
The primary goal of creating an artificial intelligence still
remains a distant dream, but people are getting an idea of
the ultimate path that could lead to it.

Aviation:

Airlines use expert systems in planes to monitor
atmospheric conditions and system status. The plane can be
put on autopilot once a course is set for the destination.

Weather Forecast:
Neural networks are used for predicting weather
conditions. Previous data is fed to a neural network,
which learns the pattern and uses that knowledge to
predict weather patterns.
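The "feed previous data to a network that learns the pattern" idea can be illustrated with even a single artificial neuron trained by gradient descent. This is only a toy sketch: the model is far simpler than a real forecasting network, and the "yesterday's readings, did it rain?" data is invented.

```python
# A minimal sketch of learning a pattern from previous data with a
# single-neuron model (logistic regression). The toy weather data
# (humidity, pressure drop -> rain next day?) is invented.

import math
import random

def train(samples, epochs=2000, lr=0.5):
    """Fit weights and bias to the samples by stochastic gradient descent."""
    random.seed(0)
    w = [random.uniform(-0.5, 0.5) for _ in range(2)]
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            z = w[0] * x[0] + w[1] * x[1] + b
            pred = 1.0 / (1.0 + math.exp(-z))  # sigmoid activation
            err = pred - target                # gradient of the loss w.r.t. z
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(model, x):
    """Probability of rain for a new observation."""
    w, b = model
    return 1.0 / (1.0 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))

# Toy history: (humidity, pressure drop) -> did it rain the next day?
data = [((0.9, 0.8), 1), ((0.8, 0.7), 1), ((0.2, 0.1), 0), ((0.3, 0.2), 0)]
model = train(data)
print(predict(model, (0.85, 0.75)))  # humid day with a pressure drop
```

After training, the model assigns a high rain probability to conditions resembling past rainy days and a low one to conditions resembling dry days, which is the learning-from-previous-data loop the paragraph describes, in miniature.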

CONCLUSION:

In its short existence, AI has increased
understanding of the nature of intelligence and
provided an impressive array of applications in a
wide range of areas. It has sharpened our
understanding of human reasoning and of the
nature of intelligence in general. At the same time,
it has revealed the complexity of modeling human
reasoning, opening new areas and rich
challenges for the future.
