Problem Statement:
With the advancement of technology over the past few decades,
autonomous mobile robot navigation has become a popular research
topic. Large companies are now moving towards integrating
autonomous mobile robots into their work areas. Despite the
prevalence of indoor robots in industry, building full-sized robots
that can navigate outdoors autonomously remains a great challenge.
Some large companies are now conducting research on driverless cars
that can drive through heavy traffic; notable among these efforts is
Google's driverless car, designed by Sebastian Thrun and his team.
They successfully designed the driverless car and tested it in
various states across the US, but the sensors and systems used were
far from economical.
One of the main problems with this kind of research is that
developing such a system can cost a huge sum of money and take a
long time. However, many open-source solutions have now become
readily available to developers, cutting budget costs and
development time. The OpenCV library, originally developed by Intel,
is a free-to-use library of programming functions for computer
vision systems. The Arduino project has earned the respect of a huge
online community by opening its hardware designs to the public;
these designs can serve as a base for controller boards. With the
help of this open-source software and hardware, development of
autonomous mobile robots can be much easier, faster, and cheaper.
With all the tools needed, the focus of the research will be to
implement an algorithm for a robot's navigation system in a
dynamically changing environment. First, the robot must be able
to map its surroundings the first time it is introduced to the
environment. Successive maps will then be made each time the robot
passes through the same area. Then, by applying a filtering
algorithm each time a new map is obtained, noise and other
non-stationary objects will be removed, producing a detailed map of
the environment. The robot will then use the filtered map to improve
its navigation.
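As an illustration, the map-filtering step described above can be sketched as a simple occupancy-grid vote across successive passes: cells seen occupied in most passes are kept as static structure, while transient detections (pedestrians, noise) fall below the threshold and are removed. The grid representation, the majority threshold, and the function name below are assumptions for illustration, not the final design.

```python
# Sketch: keep only obstacles observed consistently across several passes.
def filter_static_map(maps, threshold=0.6):
    """maps: list of 2-D occupancy grids from successive passes
    (1 = occupied, 0 = free). Returns a grid keeping only cells
    occupied in at least `threshold` of the passes."""
    passes = len(maps)
    rows, cols = len(maps[0]), len(maps[0][0])
    static = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            hits = sum(m[r][c] for m in maps)  # times this cell was occupied
            if hits / passes >= threshold:
                static[r][c] = 1               # consistent -> static obstacle
    return static

# A wall seen in all three passes survives; a pedestrian seen once is filtered out.
m1 = [[1, 1], [0, 1]]
m2 = [[1, 0], [0, 1]]
m3 = [[1, 0], [0, 1]]
print(filter_static_map([m1, m2, m3]))  # -> [[1, 0], [0, 1]]
```

In a real system the grids would come from sensor scans registered to a common frame, and the threshold would be tuned against the robot's revisit rate.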
Outlined Methodology:
I. Computer Vision
   a. Road Edge Detection
   b. Pedestrian and Moving Object Detection
   c. Landmark Recognition
II. Incremental Map Building
   a. Instantaneous Map Building
      i. Static Obstacle Detection
      ii. Dynamic Obstacle Detection
   b. Static Obstacle Map Building
      i. Dynamic Obstacle Removal
III. Fuzzy Logic Control
   a. Path Planning Control
      i. Concurrent Localization and Pose Estimation
      ii. Destination Calculation and Planning Algorithm
      iii. Road Following Algorithm
   b. Obstacle Avoidance Control
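A minimal sketch of the fuzzy obstacle-avoidance control in item III.b, assuming left/right sonar-style distance readings: each distance is mapped to a "close" membership degree, and a small rule base is defuzzified into a steering command. The membership breakpoints, distance scale, and function names are illustrative assumptions, not the controller to be built.

```python
def tri(x, a, b, c):
    """Triangular membership function rising on [a, b] and falling on [b, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def steer(left_dist, right_dist):
    """Fuzzy rules: the closer an obstacle on one side, the harder the robot
    steers toward the other side. Distances in metres (illustrative scale);
    output ranges from -1 (hard left) to +1 (hard right)."""
    left_close = tri(left_dist, -1.0, 0.0, 1.0)    # degree the left obstacle is close
    right_close = tri(right_dist, -1.0, 0.0, 1.0)  # degree the right obstacle is close
    # Defuzzify by a weighted average of the rule outputs:
    # close-left -> steer right (+1), close-right -> steer left (-1).
    den = left_close + right_close
    return (left_close - right_close) / den if den else 0.0

print(steer(0.2, 2.0))  # positive: steers right, away from the close left obstacle
```

A full controller would add more linguistic terms (e.g. "near", "far") and combine this with the path-planning rules, but the membership/rule/defuzzification pipeline is the same.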
Gantt Chart: