
Technical report, IDE1144, September 13, 2011

Master's Thesis in Intelligent Systems


By: Hasan, Meqdad Hamdan and Kali, Rahul Raj.
School of Information Science, Computer and Electrical Engineering
Halmstad University
Method for Autonomous Picking of Paper Reels

By: Hasan, Meqdad Hamdan and Kali, Rahul Raj.
Halmstad University
Project Report IDE1144
Master's thesis in Intelligent Systems, 15 ECTS credits
Supervisor: Ph.D. Björn Åstrand
Examiner: Prof. Antanas Verikas
September 13, 2011
Department of Intelligent systems
School of Information Science, Computer and Electrical Engineering
Halmstad University
Preface
A few short words can never describe our gratitude to the people who encouraged us in this work and kept us on track when our energy dropped to zero, and to the people who illuminated the road for us when our ideas entered a state of chaos. Thanks to our parents, grandparents and our brothers, who were with us across the seas in their hearts and their prayers. Thanks to Dr. Björn Åstrand, who guided us throughout this thesis and steered us at every turn of it. Thanks for the help provided by Dr. Walid Taha from the Information and Computer Department and by Nadya Karginova from the Intelligent Systems Department. Finally, many thanks to the people who helped us with a few words and who are in other lands nowadays.
Abstract
Autonomous forklift handling systems have been among the most interesting
research topics of the last decades. While research fields such as path planning
and map building account for most of the work on other types of autonomous
vehicles, detecting the objects that need to be moved and picking them up has
become one of the most important research questions for autonomous forklifts.
In this research we provide an algorithm for detecting the accurate positions
of paper reels in a paper-reel warehouse, given a map of the warehouse itself.
Another algorithm assigns the priority in which the reels should be picked up.
Finally, two algorithms are provided: one for choosing the most appropriate
direction for picking the target reel, and one for choosing the safest path to
reach the target reel without damaging it.
While work on the last two algorithms shows very good results, building a
map of an unknown stack of reels by accumulating maps over time is still
tricky. In the following pages we go through the steps we followed to develop
these algorithms, starting with an overview of the problem background, moving
through the methods we used or developed, and ending with the results and the
conclusions we drew from this work.
Contents
1 Introduction 1
1.1 Malta . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.3 Goals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.4 Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.5 Outlines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2 Background 7
2.1 Problem Description . . . . . . . . . . . . . . . . . . . . . . . 7
2.2 State of the Art . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.3 Closely Related Works . . . . . . . . . . . . . . . . . . . . . . 11
2.3.1 Method of autonomous material handling . . . . . . . 12
2.3.2 Perception and detection . . . . . . . . . . . . . . . . 12
2.3.3 Map building . . . . . . . . . . . . . . . . . . . . . . . 13
3 Methods 15
3.1 Path Planning . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
3.1.1 Kinematic model . . . . . . . . . . . . . . . . . . . . . 15
3.1.2 Path selection . . . . . . . . . . . . . . . . . . . . . . . 17
3.2 Perception . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
3.2.1 LRF simulation . . . . . . . . . . . . . . . . . . . . . . 20
3.2.2 Error simulation . . . . . . . . . . . . . . . . . . . . . 20
3.2.3 Segmentation and Feature Extraction . . . . . . . . . . 21
3.3 Map Building . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
3.3.1 Matching algorithm . . . . . . . . . . . . . . . . . . . . 26
3.3.2 Map updating and map generation . . . . . . . . . . . 30
3.4 Paper Reel Selection . . . . . . . . . . . . . . . . . . . . . . . 31
3.4.1 Paper availability and reel priority . . . . . . . . . . . 31
3.4.2 What is the grasping direction? . . . . . . . . . . . . . 36
4 Results 37
4.1 Segmentation . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
4.2 Extracting Map . . . . . . . . . . . . . . . . . . . . . . . . . . 42
4.3 Sum Maps into one . . . . . . . . . . . . . . . . . . . . . . . . 46
4.4 Map analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
5 Discussion 63
6 Conclusions 65
Appendix 69
List of Figures
1.1 Oil drum picking is one of the application examples . . . . . . . . 1
1.2 Paper reels are a second application example . . . . . . . . . . . . 1
1.3 Truck Used in MALTA Project . . . . . . . . . . . . . . . . . . . 3
1.4 Wooden pallets have a specific direction for picking, sometimes even a unique one. The picking direction is perpendicular to the top wooden plates, in the opposite direction of the red arrow. . . . 3
1.5 Loading Unknown Position Paper Reel. . . . . . . . . . . . . . . 4
2.1 Illustration of the problem. (A prescribed environment contains a stack of paper reels that needs to be transported. 1 - Letters show the truck in several locations (A, B and C). 2 - Numbers show different paper reels in different locations. 3 - Arrows refer to several directions to pick. 4 - The bold arrow shows the best praxis existing now. 5 - Black dots refer to available directions to pick. 6 - Cross signs refer to the reflected laser beams.) . . . 8
2.2 Estimation of paper reel locations from uncertain laser range finder readings. Black circles are estimated by using circle fitting algorithms and red ones are the true paper reel locations. See the difference between estimated and true ones. Picture is taken from [1] . . . 9
3.1 Simplification of the MALTA robot kinematic model used to compute the position of the vehicle from wheel encoder information (bicycle model). . . . 16
3.2 The arrangement of points chosen to generate the path. Six points are chosen. P_1 is the origin point of the robot frame. P_2 is a point in the middle of the forks. P_3 is a point at a distance of 1.5d on the robot X-axis. P_6 is the center of the paper reel. P_5 is a point on the surface of the paper reel in the picking direction. P_4 is a point in the same direction as P_5 but at a distance of 1.5d from the paper reel surface. d refers to the robot total length. The path is generated by using a cubic B-spline function. . . . 17
3.3 The path selection to transport reels from the unloading area to the loading area. The path is described by 9 points. P_1, P_2 and P_3 are as points (P_4, P_5 and P_6) in Figure 3.2. 3 points (P_4, P_5 and P_6) guide the vehicle through the container entrance. The other 3 points (P_7, P_8 and P_9) guide the vehicle to the farthest unoccupied cell in the container. . . . 18
3.4 Line fitting problem. . . . 24
3.5 Circle fitting results. σ (LRF beam error variance) here is 0.05. . . . 25
3.6 Weighted linear regression. Note there is high error in calculating start and end points using equation 3.30. σ (LRF beam error variance) here is 0.05. . . . 25
3.7 Example of matching a created map with a predefined one . . . 30
3.8 Each reel has 360 accessibility angles. Instead, they are considered as intervals; thus each reel has 72 accessibility angles. . . . 32
3.9 Each object is considered an obstacle if the distance between that object and the reel is more than 1.7 times the length of the truck up to the end of the grippers . . . 32
3.10 If the obstacle is something like a wall, i.e. the reel is located close to a wall. . . . 33
3.11 Description of the problem setup used to calculate the accessibility angle. The dots refer to the possible picking directions. The Xs refer to inaccessible directions. . . . 35
3.12 Changing the representation of accessibility angles to a linear representation, then removing the inaccessible angles and flagging them. . . . 36
4.1 Testing environment . . . . . . . . . . . . . . . . . . . . . . . . 37
4.2 The truck shape and the arrangement chosen for fitting the LRFs on the truck. Both are fitted at the same height. . . . 37
4.3 Scan image from the simulated LRF. The image is relative to the scanner position . . . 39
4.4 Simulated errors in scans . . . . . . . . . . . . . . . . . . . . . . 40
4.5 Scanning points after preprocessing. All points are rotated and translated to the global map. . . . 40
4.6 Extracting features and building a local map. σ here is 0.2. Good estimation for the line segment at the bottom. . . . 41
4.7 Extracting features and building a local map. σ here is 0.2. Note the bad estimation of the line on the left . . . 42
4.8 The result of building local map from circles and lines. . . . . . . . 43
4.9 Extracting features and building local map. The best estimation for
close paper reels. . . . . . . . . . . . . . . . . . . . . . . . . . . 44
4.10 The summation of two maps into a bigger one. The biggest circles represent the extraction of paper reels in the first map. The smallest ones represent the extraction of them in the second map. The middle ones show the summation of both by using a Kalman filter. . . . 46
4.11 The result of summing two line segments together. The algorithm sums the first two left segments but does not add the third line to them. . . . 47
4.12 The adding of two segments when they are vertical. Note that the result is not as expected: one would expect the result to lie somewhere between the two smallest lines, but what happens is that the new, longer line segment lies outside both original segments. . . . 48
4.13 Another example of summation of two circles (columns or paper reels) from two different maps using a Kalman filter. The black one is the result of the summation. . . . 48
4.14 Summation of two local maps into one again. The bottom line segments are summed correctly together, while one segment appeared on the left. This segment is simply transferred to the new map without change. . . . 49
4.15 In this figure two local maps are summed together: the map from Figure 4.16 and the map from Figure 4.17. Note the difference between them. . . . 50
4.16 The local map extracted from the first LRF at scan image number 71. . . . 52
4.17 The local map extracted from the second LRF at scan image number 71. . . . 53
4.18 The result of building the global map by combining 112 local maps; 56 local maps from each LRF are built from 56 scan images over 56 positions of the truck. . . . 53
4.19 The simulation environment and the initial position of the truck.
Light circles are concrete pillars. Dark circles are paper reels.
Lines are walls and container. . . . . . . . . . . . . . . . . . . . 54
4.20 The paper reels arrangements. . . . . . . . . . . . . . . . . . . . 55
4.21 All papers are processed then the access directions for each paper
reel are determined (the magenta points). . . . . . . . . . . . . . 55
4.22 The paper reel #13 is removed first, then the rest of the paper reels are processed again. . . . 56
4.23 Paper reel number 12 is removed this time, then the map is processed again. . . . 56
4.24 Now paper reel number 26 is removed. Note that two different and separate intervals of accessibility directions appear on paper reel #11 . . . 57
4.25 The same for paper reel number 11. . . . . . . . . . . . . . . . . 57
4.26 And so on for all other reels until all papers are processed. . . . . 57
4.27 The picture shows picking intervals and access direction. . . . . . 58
4.28 The path is generated by using a B-spline function and by giving 10 different points to the function as input. The path is calculated to each paper reel from each possible picking direction, then the shortest one is chosen. . . . 58
4.29 All possible paths to the paper reel. . . . . . . . . . . . . . . . . . 59
4.30 The chosen path is the shortest and easiest of the paths in figure 4.29. . . . 59
4.31 Selected paths and picking directions for each reel. . . . 59
4.32 The chosen picking direction provides safe movement for the truck so it does not come into conflict with any other obstacle . . . 61
4.33 Also the selected path provides safe reaching of the paper reel. . . . 61
4.34 However, we could not simulate the return-back movement, where the rotation of the truck direction happens instantaneously, not exactly as in real life. . . . 61
6.1 Ackermann Steering Principle. . . . 69
6.2 The paper reel in the middle is located between a wall and a column. . . . 71
6.3 Adding two availabilities together after converting the angles to a linear representation, to get the total available directions. . . . 73
List of Tables
4.1 A comparison between segmentation results: number of effective segments and length of segments when three values of the noise reduction constant are used. . . . 41
4.2 Feature extraction from one LRF scan, and building a local map as in Fig. 4.6. L_i refers to a line feature. C_i refers to a circle feature. An empty place means no reading is available for this measure because it does not apply to that feature. "error" refers to a bad estimation. . . . 42
4.3 The uncertainty and location of features in Fig. 4.7. L_i refers to a line feature. C_i refers to a circle feature. An empty place means no reading is available for this measure because it does not apply to that feature. . . . 43
4.4 The uncertainty and location of features in Fig. 4.8 . . . 44
4.5 The uncertainty and location of features in Fig. 4.9 . . . 45
4.6 Summation of two lines into the global map, Fig. 4.11; a case spotlight . . . 47
4.7 Summation of two vertical lines. Objects in figure 4.12. A case spotlight . . . 48
4.8 Result of summation of two pillars as in Fig. 4.13 . . . 49
4.9 The result of fusing two local maps together using a Kalman filter. Map_1 represents some features from Figure 4.16. Map_2 represents the same features from Figure 4.17. The last part of the table shows the result of summing them together as appears in Figure 4.15. . . . 51
4.10 The position, uncertainty and error of the same paper reels and column from table 4.9 after accumulating several local maps. These features are represented in the right place as appears in Figure 4.18. . . . 51
4.11 The shape of the table generated by the algorithms described in this research from analysing a given map of paper reel locations. This is only a sample from the simulation environment above. . . . 60
Chapter 1
Introduction
In industry and other fields like mining, material handling and material transportation are among the most costly activities. The cost of these activities increases if the material has to be picked and transported in some kind of order or arrangement. The cost of order picking is estimated at 55% of warehouse operation expenses [7]. However, this cost increases further in case of damage due to picking errors. To avoid such high costs and other types of problems, the demand for efficient material handling with an efficient picking regime is growing rapidly. Examples of such applications are steel wire reel manufacturers, oil drum handling (Figure 1.1), paper mills (Figure 1.2) and other industries. Autonomous mobile robotic solutions have a great potential to offer improvements in terms of cost, safety, efficiency, flexibility and availability [1].
Figure 1.1: Oil drum picking is one
of the application examples
Figure 1.2: Paper reels are a second
application example
1.1 Malta
The MALTA project (Multiple Autonomous forklifts for Loading and Transportation Applications) [1] is a project developed at Örebro University and Halmstad University in Sweden, together with Danaher Motion Särö, Linde Material Handling, and Stora Enso Logistics. The research in this project is to develop a fully autonomous AGV (Automated Guided Vehicle) forklift for loading, unloading and transporting materials in highly dynamic environments. The forklift in this project was modified by replacing the two forks with clamps, Figure 1.3. The system was equipped with two laser range finders for detecting obstacles and paper reels. The system is also equipped with an AGV control system comprising a set of hardware and software components (PC, IO modules, field bus controller, rotating laser ranger, etc.). The control system interfaces the actuators and sensors of the truck through the already built-in local CAN network [1]. The task of the AGV controller is to provide traditional functionality (navigating the truck between two locations, an initial and a goal location). A spinning laser range finder fitted on top of the truck canopy and reflective markers fitted around the warehouse are used to determine the exact location of the truck. The path planning is done by an external PC, which determines the path between the current location and the target location. In the case of picking paper reels, the target location is the paper reel itself. The paper reel can be detected by the two laser range finders fitted at the front of the truck. The truck has been operated and tested several times. Testing showed that the truck is able to pick paper reels arranged in a special way, see Figure 1.5. One can observe from the picture that the truck can pick a paper reel from only one direction. Another thing one can observe is that the truck cannot avoid hitting a neighbouring reel if two paper reels are close to each other. This causes damage to the paper reels. Although the percentage of damage is small, it is still huge when calculated in money.
1.2 Problem
The problem of material handling has been studied and researched in different articles; [1], [6] and [5] are some examples where this topic has been studied. Anyway, none of them answers the question: in what order does the material have to be picked? Detecting the target (in this research and in [1] it is a paper reel, while in both [6] and [5] it is a wooden pallet) is one of the problems. To detect the target, its position must be estimated very accurately to avoid hitting it instead of picking it. For the wooden pallets studied in [5], the order of picking them was according to which one was on top.
Figure 1.3: Truck used in the MALTA Project
Figure 1.4: Wooden pallets have a specific direction for picking, sometimes even a unique one. The picking direction is perpendicular to the top wooden plates, in the opposite direction of the red arrow.
However, his work does not provide the operator with any kind of table about the order in which pallets should be picked; the same holds for [6]. Neither of them met any problem in defining the direction of picking, since wooden pallets have a special and sometimes unique picking direction (Figure 1.4). [1], who provided the MALTA truck to us, studies the case of detecting paper reels (cylindrical objects) in a 2D environment. However, they did not answer the previous question. The main problem in cylindrical object handling is that a cylindrical object (the reel) has 360 picking directions. So what is the best direction for picking? These two questions: For a stack of paper reels that need to be moved from an unloading to a loading area (say, from a warehouse to a container) by an autonomous guided vehicle, what is the order of moving the paper reels? And what is the grasping direction for each one of them? were our research interests.
1.3 Goals
The goal of the thesis is to provide the MALTA project and similar applications with an algorithm that helps autonomous trucks handling cylindrical materials, like paper reels and oil drums, to decide the best picking direction. This thesis also aims to provide these applications with an algorithm that decides - for a given arrangement of material - which piece should be picked first. However, these two objectives are not all we expect from this thesis.
Figure 1.5: Loading Unknown Position Paper Reel.
While we look forward to solving the problems discussed above, we also look forward to providing a good algorithm to build and update the truck's map from the sensor data.
We aim in this thesis to reduce the number of damaged paper reels by providing a safe picking algorithm. We also aim to reduce the storage area by giving the forklift a good filling algorithm. We further aim to reduce fuel consumption and increase the truck maintenance period by using a good ordering algorithm for the picking case and the container loading scenario. For these aims, the expected results are lists of paper reel locations and picking order. These lists should also contain the grasping direction of each paper reel and the optimal location for each one (where should it be put?). These results are expected to be generated from a predefined map and from sensory data fused with the predefined map.
1.4 Approach
To meet these objectives and to study the problem case, a simulation environment in MATLAB is built. Firstly, the two objectives are studied, developed and tested on a predefined map in a static environment. Then a simulation of a laser range finder and the environment is built. Then the case of building a map from sensor data is studied. To make the problem easier to study, the case of a dynamic environment, where there are many variables (such as other trucks moving in the environment, or a change in the number of material pieces (paper reels in this case) due to other truck activities), is not considered. Hopefully, this thesis can also provide a good background for other researchers to develop, in the future, a picking function that covers dynamic environments.
To build and test this simulation environment, different algorithms have to be used, such as segmentation algorithms and line and circle detection and fitting algorithms. A Kalman filter is used to fuse data together; other necessary methods are described in detail later in this thesis.
1.5 Outlines
This thesis consists of six chapters. Chapter 1 is the introduction. In Chapter 2 we give a deeper view of the problem studied in this thesis, the best implementation of it so far, and previous work related to this problem.
Then in Chapter 3 we describe the methods used here. We start by describing the kinematic model of our truck. Then we describe the mathematical models and equations used to build the simulator. Then we describe in some detail the algorithms used for the segmentation and detection problems, especially for laser scanner data. Then we describe how we build a global map using data from different laser scanners and how we update it over time. Then we discuss the algorithm we developed to analyse the constructed map, to find the available paper reels in the map and the best picking angle for each. After that we discuss the filling problem and the algorithm used in the loading (filling) scenario.
In Chapter 4 we discuss the results. In Chapter 5 we look back at our work, try to highlight its weak points and analyse it. Finally, in Chapter 6 we give our conclusions on what we did and did not achieve in this thesis.
Chapter 2
Background
In reality, transporting cylindrical materials like drums and paper reels, whether inside warehouses or outside, is usually done by forklifts driven by humans. For an autonomous forklift, transporting materials of any kind is a very challenging task. Machines (forklifts) are not as clever as humans. In this case problems range from machine localization, dead reckoning, edge detection, map building, path planning and decision making to control systems, sensor data understanding, etc. Our work here considers one of these problems in detail, together with some of the other problems that we cannot finish our work without considering. In this chapter we start by discussing our problem in more detail, then we mention the best available praxis, and we end by mentioning the previous work related to this thesis.
2.1 Problem Description
Let us return to Figure 1.2 and think for a while about how an autonomous forklift could handle this huge number of paper reels. How can it decide which one of them it has to pick up first? And at what elevation?
To give a deeper view of the problem and understand it from all sides, let us take a look at Figure 2.1. The figure illustrates paper reels stacked in some storage area and an autonomous forklift that wants to transport them, one by one, into the container. As with the MALTA forklift, the forklift in the figure has an LRF (Laser Range Finder) to detect paper reels and to determine their locations. The first problem apparent from the figure is that the truck detects different reels in different locations as it moves. Note how the truck detects reels (1, 2, 3, 4 and 6) when it is in location (A), while it detects reels (1, 2, 3, 4, 5 and 6) when it is in location (B).
Figure 2.1: Illustration of the problem. (A prescribed environment contains a stack of paper reels that needs to be transported. 1 - Letters show the truck in several locations (A, B and C). 2 - Numbers show different paper reels in different locations. 3 - Arrows refer to several directions to pick. 4 - The bold arrow shows the best praxis existing now. 5 - Black dots refer to available directions to pick. 6 - Cross signs refer to the reflected laser beams.)
Note that reel 5 is a plus. If the truck is in location (C), it can detect only reels (8, 7, 6, 4 and 3). Note that reels 7 and 8 appear for the first time here, while reels 1 and 2 disappear in this situation. Note also that in all locations the reels at the back cannot be detected, as the truck cannot figure out the whole stack using only the LRF sensor. So how can it decide which paper reel should be picked up first? Let us assume that the truck does not move along several free paths. Let us assume, as in Figure 1.5, that the truck moves along predefined and fixed paths. Even in this situation the question still arises: which paper reel should be picked first? Let us assume the worst situation, where the forklift is in location B and it moves directly to pick reel number 5. This will cause damage to the two other reels (4 and 6). Let us also describe another dangerous situation. Assume that there is no space between the reels, and the truck moves to choose one of reels 3 or 4. In this case the truck will damage reels 2 and 4, or 3, respectively, with its grippers. Of course, if the driver is a human, then reel number 2 will be chosen, because removing it is easier and will make grasping the others easier, and also because it is the nearest
one to the container.
Again, assume we have the same dangerous situation as above, where all reels are close to each other, and the decision is to pick reel number 2 for the same reasons mentioned before. If the driver of this truck is a human, he will approach this reel from a safe direction and then grasp it. For an autonomous forklift, if there is no algorithm to give the forklift a decision about the direction from which to grasp the paper reel, exactly as happens with the MALTA forklift (see Figure 1.5), the truck will move toward reel number 2 in Figure 2.1 in the direction of the bold arrow, and again that is a really dangerous situation which should be avoided. Note that such a situation will damage reel number 3. The question is: if there are many available directions from which the truck can approach a reel and grasp it (note the black dots on the reels and the thin arrows), which one of them is the safest? And if they are all equally safe, from which one should the truck approach and grasp the reel?
Figure 2.2: Estimation of paper reel locations from uncertain laser range finder readings. Black circles are estimated by using circle fitting algorithms and red ones are the true paper reel locations. See the difference between estimated and true ones. Picture is taken from [1]
Another problem appears when using sensory data: errors in the sensory data, see Figure 2.2, can cause the same problem as above even if there is a space between the reels. So how can the truck know the exact location of the paper reels? Finally, because data from the LRF are used to detect and figure out the reels, and that gives limited information as mentioned above, could it be possible that there is another paper reel which is more suitable to pick up first?
In short, this is a typical example of what could happen and of the expected problems in the reel loading situation. Other kinds of problems appear when we move on to think about the container filling scenario. The main question here is where the forklift should put the second paper reel inside the container. Should it put it behind the first one or next to it? Another question also comes to mind in this scenario when the forklift starts to fill the second row in the container: where should the first reel of this row be? Exactly next to the first reel of the first row, or between two reels of the first row to save container area? And what will the situation be when the paper reels have different diameters?
These are the problems we try to provide a solution for in this research.
2.2 State of the Art
The most recent and best praxis can be found, in some short detail, in [1]. As mentioned above, A. Bouguerra et al. (2007) created a forklift used in a special case: transporting cylindrical materials with several base diameters from a storage area to be loaded into train wagons, containers or other means of transportation. This forklift was developed to work inside a warehouse in a highly dynamic area.
A. Bouguerra et al. used an environment modified with pre-installed infrastructure to solve the localization problem. As mentioned in chapter one, they used a spinning LRF with reflectors fitted around the environment to determine the exact location of the truck. Due to the high stacks of paper reels in the environment, as in Figure 1.2, they found this solution insufficient. Later they suggested two solutions. The first one was fitting reflectors under the ceiling and pointing the LRF or a camera at the ceiling. The second solution they suggested was using the paper reels themselves as landmarks in a SLAM process. They also suggested labelling each paper reel with a unique bar code.
For the navigation function they used a predefined map uploaded to the truck by the operator. This map contains the possible drivable paths, defined as a collection of lines and B-splines. It also contains the locations of different things such as dangerous places (floor drop-down elevations), storage areas, loading and unloading areas and the doors of the warehouse. They use an external PC to generate the run-time trajectory using cubic B-splines. This function allows them to provide the machine with safe motion and obstacle avoidance functions. However, using the predefined map imposes some limitations (as they said). These
limitations were that the truck is not allowed to change its path at runtime. That means the truck has to stop before generating its new path. This appears in the loading scenario as in Figure 1.5, where the truck has to move to point A before generating its new path into the container or toward the paper reels.
For the perception functions, as said before, they used two LRFs to detect paper reels and other obstacles. Later, after publishing their work, they added a third LRF at the back of the truck to detect and track obstacles and reels during backward motion. The main function of the perception process is to detect and track paper reels, which means, in 2D, finding circle centers and diameters. They segment the LRF data into several segments, using the distance between points to segment the scan image. Then the circle fitting algorithm is applied and the position and radius are estimated. If the diameter of the circle is outside a predefined interval the circle is rejected; the others are considered paper reels. To estimate the locations of reels in global coordinates, the relative position provided by the previous step is combined with the global position of the truck described in a previous paragraph. Finally, a tracker keeps a global map of detected paper reels, and the global position of each paper reel is updated using a Kalman filter.
Since the truck works in a dynamic environment, it is very important to keep the map generated in the previous step updated all the time. They therefore used the Euclidean distance to associate the closest reel in the map with the sensing data. If there is no reel in the map, a new one is added at that location in the global map, and vice versa.
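To make the tracking step described above concrete, the following sketch shows one way such a nearest-neighbour association and Kalman-style position update could look. It is only an illustration of the idea reported in [1]; the variable names, the fixed measurement noise and the gating threshold are our own assumptions, not the actual MALTA implementation.

import numpy as np

GATE = 0.5          # association gate in metres (assumed value)
MEAS_VAR = 0.05**2  # assumed measurement noise variance of a detected reel centre

def update_reel_map(reel_map, detections):
    """reel_map: list of dicts {'pos': 2-vector, 'var': scalar variance}.
    detections: list of 2-vectors (reel centres in global coordinates)."""
    for z in detections:
        z = np.asarray(z, dtype=float)
        if reel_map:
            dists = [np.linalg.norm(r['pos'] - z) for r in reel_map]
            i = int(np.argmin(dists))
        if not reel_map or dists[i] > GATE:
            # no reel close enough: insert a new one into the global map
            reel_map.append({'pos': z, 'var': MEAS_VAR})
            continue
        r = reel_map[i]
        # scalar Kalman update applied to both coordinates
        k = r['var'] / (r['var'] + MEAS_VAR)   # Kalman gain
        r['pos'] = r['pos'] + k * (z - r['pos'])
        r['var'] = (1.0 - k) * r['var']
    return reel_map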
In the loading scenario they used the radial distance to pick the paper reel nearest to a predefined point (the point of generating the path), point A in Figure 1.5. But they did not consider the picking direction of the paper reel.
Finally, we have to say that in their published papers [1] and [2], which we consider our first references, the researchers did not give a deep view of their work. Such a view can be achieved by understanding the works in their references.
2.3 Closely Related Works
Autonomous forklift handling systems are, in our opinion, one of the most interesting research topics of the last decades. Robot intelligence is usually demonstrated by navigation and area exploration; only a small number of studies focus on the decisions of where to pick and grasp. For the purpose of this research we divide the previous work into three fields: whole-system research, perception and detection, and map building and picking-problem research.
2.3.1 Method of autonomous material handling
As described in section 2.2 above, the researchers in the MALTA project were the first to develop a solution for handling cylindrical objects with an autonomous vehicle. However, they were not the first to deal with forklifts that handle materials. Garibotto in [5] was one of the earlier researchers in this field. In his research, he and his partners developed a forklift to handle pallets autonomously, with the option of operating the forklift manually. The forklift task, drivable paths, and locations of loading and unloading areas were fed to the robot manually through a portable keypad. Robot localization was solved by fusing sensory data from three-dimensional vision with odometry data. They used an H-shaped landmark fixed on the ground as a suitable reference for correcting the robot location error. The same solution was used to determine the beginning and end of the pallet loading area. A predefined map of the working area or warehouse was also used. For detecting pallets and picking them up they used 3D vision provided by a video camera, but not the same one used for the navigation task.
2.3.2 Perception and detection
Most previous work used mainly two types of perception. The widespread one is vision, using a video camera [5] or an omni-directional one. In the last decade, due to decreasing cost and computational demands, most research has moved to using the laser range finder (LRF) as the main environment perceptor for robots [1]-[6] and [14]. Several other perceptors have been used instead of these two, but their use has decreased over time due to technical problems such as high noise (sonar and infrared sensors) or the need for additional sensors to obtain the direction and distance to the target (infrared sensors). The advantage of using the LRF is not only its lower cost and low computational demands, but also that its dependence on the environment is lower than that of the others; for example, brightness, humidity and temperature do not affect its readings. Besides any one of these sensors, or sometimes a collection of them as in [5], other sensors are used to measure other robot variables such as speed (encoder), internal temperature, and direction (odometer). Almost all researchers used data from a real laser range finder (LRF), or they do not mention how they simulated this device.
The data collected from the sensors (LRF) in [1], [6], [14] and [16] are clustered into groups according to the distance between two consecutive points. If the distance between these two points is more than a predefined threshold, the line fitting algorithm is applied in [6] and [14], the circle fitting algorithm is applied in [1], or one of them is applied in [15], according to the result of comparing angles between four consecutive points, as will be explained in the next section.
2.3.3 Map building
Map building, localization and object detection are wide research areas and have been studied a lot in the last decades. The process of map building starts from object detection and object fitting. For an indoor environment, like our system here, objects are usually described by line segments, arcs and circles. João Xavier [15] is one of the researchers who have studied map building techniques for indoor environments. In his research he clusters the data collected from the LRF into groups according to the distance between two consecutive points: if the distance is more than a threshold, a new group is created. He then ignores any group containing fewer than four points. After that he calculates the angle between the two end points and any point in the middle. If the angle is between 90° and 135°, then there is a circle inside these points. He states that he tuned these values to detect the maximum number of circles. When a circle is detected, he applies his algorithm to fit the circle. To fit a circle he draws a line between each two consecutive points and constructs a perpendicular line from the middle of this line. The intersection point of two such perpendicular lines is the center of the circle, and the distance between this point and any of the two points is the radius of the circle. If there is more than one intersection point, he takes the mean point as the center and then calculates the radius. For more details about the application, see [15]. However, Xavier used his method to detect moving persons and people, so he tuned the inscribed angle to lie between 90° and 135°. From a theoretical point of view there is a weak point in this algorithm: it is sensitive to the error deviation. For example, for large errors in the detected data the algorithm will fail. However, he presented a method to avoid this weakness by ignoring, in the calculation, points that lie outside a threshold tolerance.
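As an illustration of the inscribed-angle test and the perpendicular-bisector fit described above, a minimal sketch follows. It is our own reading of the method in [15], not Xavier's original code; the helper names are invented, and only the 90°-135° bounds are taken from the text.

import math
import numpy as np

def inscribed_angle(p_first, p_mid, p_last):
    """Angle at p_mid formed by the segment end points (degrees)."""
    v1 = np.asarray(p_first, float) - np.asarray(p_mid, float)
    v2 = np.asarray(p_last, float) - np.asarray(p_mid, float)
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return math.degrees(math.acos(np.clip(cosang, -1.0, 1.0)))

def looks_like_circle(points, lo=90.0, hi=135.0):
    """Xavier-style test: mean inscribed angle of interior points within [lo, hi]."""
    angs = [inscribed_angle(points[0], p, points[-1]) for p in points[1:-1]]
    return lo <= float(np.mean(angs)) <= hi

def bisector_circle_fit(points):
    """Centre = mean intersection of perpendicular bisectors of consecutive chords."""
    pts = np.asarray(points, dtype=float)
    centres = []
    for i in range(len(pts) - 2):
        # bisector of chord (i,i+1) and of chord (i+1,i+2), each as a line d.x = d.m
        m1, m2 = (pts[i] + pts[i + 1]) / 2, (pts[i + 1] + pts[i + 2]) / 2
        d1, d2 = pts[i + 1] - pts[i], pts[i + 2] - pts[i + 1]
        A = np.vstack([d1, d2])
        b = np.array([np.dot(d1, m1), np.dot(d2, m2)])
        if abs(np.linalg.det(A)) > 1e-9:          # skip collinear triples
            centres.append(np.linalg.solve(A, b))
    centre = np.mean(centres, axis=0)
    radius = float(np.mean(np.linalg.norm(pts - centre, axis=1)))
    return centre, radius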
The most widely used and most common method for fitting circles from laser data is the least squares method, well known from curve regression. P. Núñez [10] and his team worked on this algorithm. They proposed a circle fitting algorithm in the Cartesian coordinate space. Their algorithm, however, handles the case when the errors in the X and Y coordinates are not independent, and they also provide a variance matrix associated with the estimated circle parameters. Indeed, many algorithms fit circles with the least squares method in Cartesian coordinates. However, Julian Ryde [12] used the least squares method to fit the data directly from the laser scanner readings, based on a polar coordinate system, because doing so is faster and more accurate. For more details about his method, refer to [12].
For the purpose of map building, Zezhong [16] developed an algorithm that relies on three steps to build a complete map. In the first step he takes the data from the LRF and segments it with an algorithm he calls an adaptive clustering algorithm. He then computes the line segment corresponding to each cluster by calculating its center point, length and orientation from stochastic variables. In the second step he builds a local map from the segments by arranging the lines counter-clockwise. Finally, he builds a global map from the different local maps. To update the global map at each laser scan he compares the local map and the global map; if some line segments appear in the local map but not in the global one, he inserts them into the global map. The final global map is the merge of all local maps. To localize the robot he uses some features in the room as landmarks and refers to them as complete line segments in the environment map. He then tries to find the possible complete line segments in each local map by comparing the lengths of the landmarks with each line of the local map. The maximum likelihood is chosen as the best possible match, and the current position is computed according to this match. After several matching steps in relative position he gets the exact relative position of the robot. The heading of the robot is calculated as the difference between two vectors: the first is the vector between the center points of two landmarks in the environment map, and the second is the vector between the center points of the corresponding line segments in the local map. Then the absolute global coordinates are computed by transferring the relative position of any of the corresponding line segments to the global coordinates, knowing the calculated heading and the coordinates of the center point of the landmark.
Chapter 3
Methods
In this chapter we describe in some detail the methods used to solve the problems in this thesis and the hypotheses assumed in dealing with them. The chapter is divided into several parts, following the order of problem solving. First we describe the approach used in path planning and trajectory generation. Then we move on to explain the perception algorithm. After that we describe the algorithm used to generate a map from the perception data. The final two parts concern the core of this research: first reel selection, then the loading-scenario problem.
3.1 Path Planning
We start from only one hypothesis: we have a predefined map of all immovable facilities and objects in the warehouse. This hypothesis is realistic and is a satisfactory starting point. As mentioned in section 2.3.1, almost all research in the field of indoor autonomous vehicles assumes the availability of such a map, so we consider this hypothesis reasonable. What makes it satisfactory is the simplicity of creating this map. Architectural plans for warehouses are usually available in an archive, so it is easy to create from them an accurate map containing the locations of the main facilities such as walls, doors, columns, drop-down floors and drivable paths. In this case it is not necessary to provide the exact location of each paper reel.
3.1.1 Kinematic model
As described in chapter 1 and shown in Figure 1.5, the MALTA robot is an ordinary industrial forklift with two modified forks. The wheels (the main part of the driving system relevant to this topic) are standard wheels with
Figure 3.1: Simplification of the MALTA robot kinematic model used to compute the position of the vehicle from wheel encoder information (bicycle model).
two degrees of freedom: one rotation around the wheel axis and the other around an offset steering point. The two front wheels have only one degree of freedom, without any steering ability, but they provide the traction power. The two rear wheels have the ability to steer; they follow the Ackermann steering principle¹. For simplicity, and assuming that there is no slip in motion, we can simplify the robot model to a robot with only two wheels, one steering wheel at the back and one at the front (bicycle model, as in Figure 3.1).
Let φ be the steering angle of the rear wheel in the simplified model of Figure 3.1. Let v be the linear velocity of the robot and let L be the distance between the rear and front wheels. Then let X and Y be the x and y positions of the robot in the global coordinate frame, let θ be the heading of the robot in the same frame, and let dT be the sampling period. Then the position in the global frame is

P = [ X, Y, θ ]ᵀ                                                          (3.1)

Then the change in position as a function of v and φ (the kinematic model of our robot) can be described as follows:
1
For more information about Ackermann steering principle and the method of cal-
culating steering angle for each wheel look to Desmond King-Hele, F.R.S., Erasmus
Darwins Improved Design For Steering Carriages - and Cars. Notes and records of the
Royal Society of London, Vol. 56, No.1 (Jan., 2002) pp (41- 61)
Figure 3.2: The arrangement of points chosen to generate the path. Six points are chosen. P_1 is the origin point of the robot frame. P_2 is a point in the middle of the forks. P_3 is a point at a distance of 1.5d on the robot X-axis. P_6 is the center of the paper reel. P_5 is a point on the surface of the paper reel in the picking direction. P_4 is a point in the same direction as P_5 but at a distance of 1.5d from the paper reel surface. d refers to the robot total length. The path is generated by using a cubic B-spline function.
Δθ = (v / 2L) · dT · sin(2φ)

ΔX = (v · dT / 2) · [ cos(θ + (v/L) · dT · sin(2φ)) + cos(θ) ]

ΔY = (v · dT / 2) · [ sin(θ + (v/L) · dT · sin(2φ)) + sin(θ) ]            (3.2)
For more details about the derivation of the kinematic model see the
appendix.
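To make the model above concrete, here is a small sketch of how the pose update of Eq. 3.2 could be coded. It only transcribes the equations as reconstructed here (with φ the rear steering angle, θ the heading and dT the sampling period); the function and parameter names are our own.

import math

def bicycle_update(x, y, theta, v, phi, dT, L):
    """One odometry step of the simplified (bicycle) model, Eq. 3.2.
    x, y, theta : current pose in the global frame
    v           : linear velocity, phi: rear-wheel steering angle
    dT          : sampling period, L: distance between rear and front wheel
    """
    d_theta = (v / (2.0 * L)) * dT * math.sin(2.0 * phi)
    d_x = 0.5 * v * dT * (math.cos(theta + (v / L) * dT * math.sin(2.0 * phi))
                          + math.cos(theta))
    d_y = 0.5 * v * dT * (math.sin(theta + (v / L) * dT * math.sin(2.0 * phi))
                          + math.sin(theta))
    return x + d_x, y + d_y, theta + d_theta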
3.1.2 Path selection
The mission of choosing the robot path here is not to plan the path inside the warehouse, nor to avoid obstacles. The main idea is to provide the truck with the safest path to reach the paper reel from a suitable direction, and to place the reel inside the loading area in a suitable way. The suitable direction will be explained in section 3.4.2.
To simulate the path for this process, a cubic B-spline function is used to generate the path. The function works as in [8]. To make the function work properly, at least six different points should be provided to it. These points should be chosen to ensure a soft start and soft motion for the truck, while the truck control system has to be designed to follow these
Figure 3.3: The path selection to transport reels from the unloading area to the loading area. The path is described by 9 points. P_1, P_2 and P_3 are as points (P_4, P_5 and P_6) in Figure 3.2. 3 points (P_4, P_5 and P_6) guide the vehicle through the container entrance. The other 3 points (P_7, P_8 and P_9) guide the vehicle to the farthest unoccupied cell in the container.
points. As shown in Figure 3.2, the first three points are the origin point of the truck's relative frame, the middle point between the two forks, and a point in front of the truck, in the same heading direction, at a distance equal to 1.5 times the truck length. The last three points are chosen to be the center of the target paper reel, a point on the circumference of the reel in the picking direction, and a point in the same picking direction but at 1.5 truck lengths from the circumference of the reel. This arrangement of points ensures a straight movement of the truck during the picking process.
The mission of transporting reels is defined by four main variables: the reel location, the reel picking direction, the loading area coordinates in the global frame, and the access to the loading area (the container gate). Again the B-spline function is used to generate the path from the paper reel to the loading area. The loading area in the simulation environment is described by a matrix containing the coordinates of the container (the container here is the loading area) and the occupied locations inside these coordinates. Each cell of this matrix is a square cell with side length equal to the diameter of the paper reel plus a safe distance for the truck grippers. See Figure 3.3.
The mission is to fill the farthest row starting from the first empty location. The description above holds if the orientation of the container is as in
Figure 3.3 and Figure 4.1, or if the container is located on the north side of the map. The path to the first empty location of the farthest row of the container grid is described by 9 points as follows (see Figure 3.3):
Path_loading = [
    P_1 = (C_x, C_y)_t,
    P_2 = (C_x, C_y)_t − r_t · e^(iθ_t),
    P_3 = (C_x, C_y)_t − (r_t + 1.5d) · e^(iθ_t),
    P_4 = P_5 − 1.5d · e^(iβ),
    P_5,
    P_6 = P_5 + 1.5d · e^(iβ),
    P_7 = E_cell − (r_t + 1.5d) · e^(iβ),
    P_8 = E_cell − r_t · e^(iβ),
    P_9 = E_cell + r_t · e^(iβ)
]                                                                          (3.3)
where (C_x, C_y)_t is the center of paper reel number t, t = 1, 2, 3, ..., n, and n is the total number of paper reels in the stack. P_5 is a random point on the container gate where
G_lc + (w/2) · e^(i(β − π/2))  ≤  P_5  ≤  G_rc − (w/2) · e^(i(β − π/2))    (3.4)
G_lc is the gate left coordinate. G_rc is the gate right coordinate. β is the container orientation. w is the truck total width. d is the truck total length. E_cell is the nearest edge of the farthest unoccupied cell in the container. r_t is the paper reel radius. θ_t is the picking direction of the target reel.
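As an aside, the occupancy-matrix description of the loading area above suggests a simple way of picking the farthest unoccupied cell E_cell. The sketch below is one possible reading of it; the grid layout, the frame convention and all names are our own assumptions rather than the thesis implementation.

import numpy as np

def farthest_free_cell(occupied, cell_side, container_origin, beta):
    """Pick the target cell E_cell: first free cell of the farthest row.
    occupied: 2D boolean array (rows x cols), row 0 assumed nearest the gate.
    Returns the centre of the chosen cell in global coordinates, or None if full."""
    rows, cols = occupied.shape
    for row in range(rows - 1, -1, -1):          # farthest row first
        for col in range(cols):                   # first empty location in that row
            if not occupied[row, col]:
                # cell centre in the container frame, then rotate by beta
                local = (np.array([col, row], dtype=float) + 0.5) * cell_side
                c, s = np.cos(beta), np.sin(beta)
                R = np.array([[c, -s], [s, c]])
                return np.asarray(container_origin, float) + R @ local
    return None                                   # container is full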
The return path is described in the same way but in reverse order, and here θ, r and (C_x, C_y) are the parameters of the next paper reel in the stack.
Path_returning_back = [ P_9, P_8, P_7, P_6, P_5, P_4, P_3, P_2, P_1 ]ᵀ    (3.5)
The only difference here is that points P_1, P_2 and P_3 are calculated according to the location and picking direction of the new paper reel target t + 1.
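The sketch below shows one way the six picking points of Figure 3.2 could be turned into a smooth path with a uniform cubic B-spline, in the spirit of the description above. The point construction follows the text; the function names, the uniform knot spacing and the use of the points as a control polygon (rather than interpolation points) are our own assumptions, not necessarily the exact formulation of [8].

import numpy as np

def picking_control_points(robot_xy, fork_xy, heading, reel_xy, reel_r, pick_dir, d):
    """Six control points P1..P6 of Figure 3.2 (all 2D, angles in radians)."""
    p1 = np.asarray(robot_xy, float)                       # robot frame origin
    p2 = np.asarray(fork_xy, float)                        # middle of the forks
    p3 = p1 + 1.5 * d * np.array([np.cos(heading), np.sin(heading)])
    p6 = np.asarray(reel_xy, float)                        # reel centre
    u = np.array([np.cos(pick_dir), np.sin(pick_dir)])     # picking direction
    p5 = p6 - reel_r * u                                   # point on the reel surface
    p4 = p5 - 1.5 * d * u                                  # 1.5 truck lengths before it
    return np.vstack([p1, p2, p3, p4, p5, p6])

def cubic_bspline(ctrl, samples_per_seg=20):
    """Evaluate a uniform cubic B-spline over the control polygon."""
    ctrl = np.asarray(ctrl, float)
    path = []
    for i in range(len(ctrl) - 3):
        for t in np.linspace(0.0, 1.0, samples_per_seg, endpoint=False):
            b = np.array([(1 - t) ** 3,
                          3 * t ** 3 - 6 * t ** 2 + 4,
                          -3 * t ** 3 + 3 * t ** 2 + 3 * t + 1,
                          t ** 3]) / 6.0
            path.append(b @ ctrl[i:i + 4])
    return np.asarray(path)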
3.2 Perception
In the simulation environment we use an LRF as the main perception device. The following parts describe the method used to simulate this device and its readings. In the third part of this section we describe the method used to cluster and fit the data collected from the LRF.
3.2.1 LRF simulation
The simulated LRF is a SICK S300, which sends and receives 360 beams with a resolution of 0.5° and covers 180°. The maximum range of the beam is 80 m. The principle of the LRF sensor is simple: by calculating the time of flight of each beam it can measure the distance to an object, as long as the object is inside the LRF range. We simulate each beam as a line and the reflection point as the intersection point between the beam line and the nearest object. The input of the simulator is the location of the LRF in the environment [x, y, θ] and the environment map itself. The output of the function is the reflection of each beam with its corresponding direction. The reflection is described by the exact distance between the LRF and the target (object).
Inside the simulator the following steps are performed (a small code sketch follows this list):
- The environment map is translated and rotated clockwise around the position of the LRF.
- All objects of the map located behind the LRF are removed.
- In an iterative loop, the intersection between each object (walls and other obstacles as line segments, columns and reels as circles) and each beam is tested, and calculated if it exists.
- For each beam, the distances of all intersections are collected in one array, and the minimum distance is taken as the reflection point of the beam.
- Finally, a normally distributed error is added as described in the following section.
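A minimal sketch of the ray-casting step for circular objects (reels and columns) is given below; walls as line segments would be handled analogously. The geometry is standard ray-circle intersection; the function names and the map representation are our own assumptions, not the actual MATLAB simulator.

import numpy as np

def ray_circle_distance(origin, angle, centre, radius):
    """Distance along a beam from `origin` at `angle` to a circle, or None if missed."""
    d = np.array([np.cos(angle), np.sin(angle)])      # unit beam direction
    oc = np.asarray(origin, float) - np.asarray(centre, float)
    b = 2.0 * np.dot(d, oc)
    c = np.dot(oc, oc) - radius ** 2
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None                                   # beam misses the circle
    t = (-b - np.sqrt(disc)) / 2.0                    # nearer intersection
    return t if t > 0.0 else None                     # ignore objects behind the LRF

def simulate_scan(pose, circles, n_beams=360, fov=np.pi, max_range=80.0):
    """Ideal scan: ranges of n_beams beams spread over `fov` around the pose heading.
    circles: iterable of (centre, radius) pairs."""
    x, y, heading = pose
    angles = heading + np.linspace(-fov / 2, fov / 2, n_beams)
    ranges = np.full(n_beams, max_range)
    for k, a in enumerate(angles):
        hits = [ray_circle_distance((x, y), a, c, r) for c, r in circles]
        hits = [h for h in hits if h is not None and h < ranges[k]]
        if hits:
            ranges[k] = min(hits)
    return angles, ranges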
3.2.2 Error simulation
To make the simulator closer to reality, two kinds of error are added to the simulator readings.¹ The first one is the error in the location of the robot; we assume it simulates the error in the odometry readings. To make the calculation easier, the errors in the X and Y positions are assumed to be uncorrelated. The error in location is simulated as follows:

¹ Readings from the LRF are taken every sampling period T, which means that if the robot does not move fast enough, one or more readings could be taken at the same location. On the other hand, if the robot moves fast, the gap between the robot positions of any two consecutive readings will be big. To solve this problem a special algorithm is needed to synchronize the odometer readings with the LRF (or other sensor) readings, for example an algorithm that extracts readings from these sensors at a specific time t. In our work we ignored the simulation of the sampling time of the LRF. Instead, we arranged to extract one reading at each location of the robot in the environment. We found this more than enough to simulate the sampling time of the LRF, because in real applications we do not expect to get exactly one reading at each location.
P_noisy = P + (P / P_{t−1}) · N(0, σ_P)                                    (3.6)
where P is X, Y or θ. Finally, a different σ is chosen for each of them to make the errors uncorrelated.
d_noisy(i) = d(i) + (d(i) / d_max) · N(0, σ_d)                             (3.7)
where d_max is the maximum range of the LRF. The main difference between the two kinds of error is that the error in equation 3.6 accumulates over time, while the error in equation 3.7 does not.
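A small sketch of the two noise models, as reconstructed in Eqs. 3.6 and 3.7, could look as follows. The exact form of the scaling factors is our reading of the partly garbled equations, so treat the code as illustrative only; the sigma values are arbitrary.

import numpy as np

rng = np.random.default_rng(0)

def noisy_pose(pose, prev_pose, sigmas=(0.02, 0.03, 0.01)):
    """Eq. 3.6: pose noise scaled by the ratio to the previous pose value.
    pose, prev_pose: (X, Y, theta); a different sigma per component keeps the
    errors uncorrelated. The error accumulates because the noisy pose is fed
    back as the next prev_pose."""
    pose, prev_pose = np.asarray(pose, float), np.asarray(prev_pose, float)
    return pose + (pose / prev_pose) * rng.normal(0.0, sigmas)

def noisy_ranges(d, d_max=80.0, sigma_d=0.05):
    """Eq. 3.7: range noise scaled by the relative distance d/d_max (non-accumulating)."""
    d = np.asarray(d, float)
    return d + (d / d_max) * rng.normal(0.0, sigma_d, size=d.shape)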
3.2.3 Segmentation and Feature Extraction
Data taken from the LRF are in polar form, P_i = (r_i, θ_i), where θ_i is the direction of the point with respect to the LRF location. Converting these data to Cartesian form, P_i = (x_i, y_i), would introduce an unnecessary error due to the approximation process, so we decided to keep all data in polar form for the segmentation step. Several segmentation methods were studied and tested (all the methods in [11]); the most popular one, shown below, was chosen because it gives suitable results without requiring complicated calculations. This algorithm is known as the Point-Distance-Based Segmentation method (PDBS). The algorithm returns each segment as a flag for the locations of its start and end points; the detailed work is in [11].
Consider a full scan image of size N points. A segment of the scan image is described as S_i = {(θ_i, r_i) | i = k : n}, where 1 ≤ k < n ≤ N.
The segmentation condition is: if D(r_i, r_{i+1}) > D_thd then the segments are separated, otherwise do nothing. D is the Euclidean distance as in equation 3.8.
D(P_i, P_{i+1}) = sqrt( r_i² + r_{i+1}² − 2 · r_i · r_{i+1} · cos(θ_i − θ_{i+1}) )    (3.8)
and D_thd is defined as follows:
D_thd = C_0 + C_1 · min{ r_i, r_{i+1} }                                    (3.9)
C_1 = sqrt( 2 · (1 − cos(θ_i − θ_{i+1})) ) = D(P_i, P_{i+1}) / r_i          (3.10)
C_0 is a constant parameter used for noise reduction.
The results of segmentation using the above two algorithms are shown in figures ?? to ??.
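The following sketch illustrates the PDBS rule of Eqs. 3.8-3.10 on a single scan. It follows the equations as written above; the constant C_0 value and the return format (index pairs marking the start and end of each segment) are our own choices.

import numpy as np

def pdbs_segments(theta, r, c0=0.05):
    """Point-Distance-Based Segmentation of one polar scan (theta, r).
    Returns a list of (start_index, end_index) pairs, end inclusive."""
    theta, r = np.asarray(theta, float), np.asarray(r, float)
    segments, start = [], 0
    for i in range(len(r) - 1):
        dtheta = theta[i] - theta[i + 1]
        # Eq. 3.8: Euclidean distance between consecutive polar points
        dist = np.sqrt(r[i]**2 + r[i+1]**2 - 2.0*r[i]*r[i+1]*np.cos(dtheta))
        # Eqs. 3.9-3.10: adaptive threshold
        c1 = np.sqrt(2.0 * (1.0 - np.cos(dtheta)))
        d_thd = c0 + c1 * min(r[i], r[i + 1])
        if dist > d_thd:                 # break the segment here
            segments.append((start, i))
            start = i + 1
    segments.append((start, len(r) - 1))
    return segments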
Circle fitting
Fitting a circle means finding the most suitable center and radius describing all points in the segment. Since the data extracted from the LRF are noisy and not accurate, we used two combined methods to find the most suitable circle. The work depends entirely on the works of Chernov and Lesort [3] and [4]. The circle equation is
(x − a)² + (y − b)² = R²                                                   (3.11)
where (x, y) are the coordinates of any point on the circle in Cartesian coordinates, (a, b) are the coordinates of the center point and R is the radius of the circle. To fit our segment points, which number no fewer than four, to a circle, we start from an algebraic fitting method and then use its result as the initial guess for the geometric fit. We use the two methods suggested in [4]: the Newton-based Taubin algebraic fit, because it is optimized for speed, and then the Levenberg-Marquardt geometric method to fit the circle in the full (a, b, R) space.
The noisy readings of the Laser Range Finder (LRF) generate a wide range of possible circles even when the most suitable circle fit is used; a deep explanation of this is given in [10]. The uncertainty in the three parameters is calculated as follows.
C = σ² · (Wᵀ W)⁻¹                                                          (3.12)
where (·)ᵀ refers to the transpose, (·)⁻¹ refers to the inverse and C is the covariance matrix of the three parameters (X_center, Y_center, R). σ is the unbiased standard deviation of the radius R, i.e. of the distance from the estimated center to each observed point, and it is calculated as follows:

σ² = Σ e_Ri² / (n − 3)                                                     (3.13)
where n is the number of points in the observed segment, and e_Ri is the difference between the estimated radius and the distance from the estimated center to each point in the segment, d(C_{a,b}; P_{x,y}).

e_Ri = R − sqrt( (a − x_i)² + (b − y_i)² )                                 (3.14)
The matrix W is an (n × 3) matrix described by the three columns u, v and 1:

W = [ u_1  v_1  1
      u_2  v_2  1
      ...
      u_n  v_n  1 ]

u_i = (x_i − a) / R,    v_i = (y_i − b) / R,    i = 1, 2, 3, ..., n    (3.15)
The covariance matrix is used, as mentioned in the previous subsection, to recognize the most suitable circle and to reject the others. This is done by testing the sum of the diagonal elements of the matrix and comparing it with a predefined threshold. In our case we used a fixed value (0.2, the error uncertainty of the LRF provided in the manufacturer's specification). So the test condition was

(3 · 0.2²) > (σ_a² + σ_b² + σ_R²)    (3.16)

Once the three parameters of the circle a, b and R and the covariance matrix are calculated, the result is exported to a local map as a circle with its uncertainty in location and dimension.
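The uncertainty test of equations 3.12 to 3.16 can be sketched as follows, assuming the circle parameters (a, b, R) have already been obtained from the Taubin/Levenberg-Marquardt fit of [3, 4]; the function names and the numpy implementation are ours:

    import numpy as np

    def circle_fit_covariance(x, y, a, b, R):
        # Equations 3.12 to 3.15: covariance of the fitted parameters (a, b, R).
        # x, y are numpy arrays with the scan points of one segment.
        n = len(x)
        e_R = R - np.sqrt((a - x)**2 + (b - y)**2)                     # eq. 3.14
        sigma2 = np.sum(e_R**2) / (n - 3)                              # eq. 3.13
        W = np.column_stack(((x - a) / R, (y - b) / R, np.ones(n)))    # eq. 3.15
        return sigma2 * np.linalg.inv(W.T @ W)                         # eq. 3.12

    def accept_circle(C, sensor_sigma=0.2):
        # Equation 3.16: keep the circle only while the summed variances of
        # (a, b, R) stay below the sensor's specified uncertainty budget.
        return np.trace(C) < 3.0 * sensor_sigma**2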
Line fitting
Fitting data points to a line is much easier than fitting them to a circle. It can be done directly using weighted linear regression, as mentioned in [13]. Considering Figure 3.4 and the line equation

ρ cos θ cos α + ρ sin θ sin α − r = 0    (3.17)

ρ cos(θ − α) − r = 0    (3.18)

where the symbols R_i, Theta_i, alfa and d in the figure correspond to ρ_i, θ_i, α and d_i in the equations. The idea is to minimize the sum of the perpendicular distances d_i, i.e. to drive the error to zero. Because of the existence of errors, the above equation can be written as

ρ_i cos(θ_i − α) − r = d_i    (3.19)
Figure 3.4: Line fitting problem.
The error to be minimized is

S = Σ d_i² = Σ ( ρ_i cos(θ_i − α) − r )²    (3.20)

Considering the weight of each point in the scan image, w, to be the inverse of that point's variance, w_i = 1/σ_i², the scan points can be regressed to a line described by its norm length r and norm angle α by the two following equations:

α = (1/2) tan⁻¹ [ ( Σ w_i ρ_i² sin 2θ_i − (2/Σ w_i) Σ Σ w_i w_j ρ_i ρ_j cos θ_i sin θ_j )
                / ( Σ w_i ρ_i² cos 2θ_i − (1/Σ w_i) Σ Σ w_i w_j ρ_i ρ_j cos(θ_i + θ_j) ) ]    (3.21)

r = Σ w_i ρ_i cos(θ_i − α) / Σ w_i    (3.22)
Fortunately the weights w of the points are all the same, so we do not need to worry about this value. This is because we segment the scan image from one LRF at a time. The value of the weight w becomes important when one sums the scan points from different sensors and then tries to segment them. The above two equations can then be written as

α = (1/2) tan⁻¹ [ ( Σ ρ_i² sin 2θ_i − (2/n) Σ Σ ρ_i ρ_j cos θ_i sin θ_j )
                / ( Σ ρ_i² cos 2θ_i − (1/n) Σ Σ ρ_i ρ_j cos(θ_i + θ_j) ) ]    (3.23)

r = Σ ρ_i cos(θ_i − α) / n    (3.24)

where n is the total number of points in the segment, and i and j are the sum counters, i = 1, 2, ..., n and j = 1, 2, ..., n.
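A compact sketch of equations 3.23 and 3.24, assuming equal weights and numpy arrays rho and theta for one segment (atan2 is used instead of tan⁻¹ for numerical robustness; the names are ours):

    import numpy as np

    def fit_line_polar(rho, theta):
        # Unweighted polar line regression, equations 3.23 and 3.24.
        # Returns the line normal angle alpha and normal length r.
        n = len(rho)
        num = (np.sum(rho**2 * np.sin(2.0 * theta))
               - (2.0 / n) * np.sum(rho * np.cos(theta)) * np.sum(rho * np.sin(theta)))
        den = (np.sum(rho**2 * np.cos(2.0 * theta))
               - (1.0 / n) * np.sum(np.outer(rho, rho) * np.cos(theta[:, None] + theta[None, :])))
        alpha = 0.5 * np.arctan2(num, den)
        r = np.mean(rho * np.cos(theta - alpha))    # eq. 3.24
        return alpha, r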
Figure 3.5: Circle fitting results. σ (LRF beam error variance) here is 0.05.
Figure 3.6: Weighted linear regression. Note the high error in calculating the start and end points using equation 3.30. σ (LRF beam error variance) here is 0.05.
The propagation of uncertainty for these two parameters during extraction is done according to the error propagation law 3.34, but ignoring the first term. So the uncertainty becomes

C_{r,α} = F_{ρ,θ} C_X F_{ρ,θ}ᵀ    (3.25)

where F_{ρ,θ} is the Jacobian matrix of r and α with respect to each ρ and θ in the segment, so the size of F_{ρ,θ} is 2 × 2n. T in the equation refers to the transpose. C_X is a diagonal matrix containing the variance of each point in both dimensions ρ and θ, which means the size of C_X is (2n × 2n).

C_{r,α} = [ σ_α²    σ_{αr}
            σ_{αr}  σ_r²  ]    (3.26)

C_X = diag( σ_ρ1², ..., σ_ρn², σ_θ1², ..., σ_θn² )    (3.27)

F_{ρ,θ} = [ ∂α/∂ρ_1 ... ∂α/∂ρ_n   ∂α/∂θ_1 ... ∂α/∂θ_n
            ∂r/∂ρ_1 ... ∂r/∂ρ_n   ∂r/∂θ_1 ... ∂r/∂θ_n ]    (3.28)
For the fitted line one can calculate the start and end points of the segment by calculating Y as

Y = −(cos α / sin α) X + r / sin α    (3.29)

However, when the extracted line is vertical, which means the value of α is near 0 or π, equation 3.29 becomes useless because the value of the denominator goes to zero. So the equation is converted to find the value of X instead of Y:

X = −(sin α / cos α) Y + r / cos α    (3.30)
Finally, to minimize the error of the feature extractor, all lines that do not satisfy the following condition are removed from the map:

(2 · 0.2²) > (σ_r² + σ_α²)    (3.31)

where 0.2 is the LRF beam error variance provided by the manufacturer, as in equation 3.16.
3.3 Map Building
Once all features from the laser scanner readings are taken, the process of map building starts. The procedure of map building consists of three steps. The first step is to build a local map from each laser range scan. The second is to match local maps to the maps from previous readings and from other laser scanners, and then to sum them together, generating a global map. The final step is to match our global map with the predefined map that we have, to discriminate each object and give it its appropriate description. Here we describe the last two steps, knowing that the first step has already been described above: at the end of the feature extraction process, the local map has already been generated. So here we describe the matching and summation algorithm.
3.3.1 Matching algorithm
From section 3.2 one can see that the features in each map are described in two ways, lines and circles. Lines describe walls, and circles describe both paper reels and concrete columns. At each scan there are three different maps to consider: the previous map from the previous scan, the predicted map derived from the previous one, and the current map from the current scan. If more than one LRF is available, more current maps can be considered, one from each laser scanner. So a matching algorithm between these maps is necessary to find the corresponding features between each map and the rest. The matching algorithm described here consists of two levels. The first matches all expected maps to each other. The second matches the resulting map to the predefined one and discriminates paper reels from concrete columns, and factory walls from other walls and other obstacles.
The local maps that are created from the extracted features in section 3.2 are relative maps; each map is described in the sensor frame. So the first step in the matcher is to rotate each map to the global frame. The feature extraction returns three parameters in the case of a circle (the radius and the two coordinates of the center) and, in the case of a line, the two end points (two coordinates each) together with the slope of the line and its intersection with the Y axis. So the rotation and translation is direct if both the location of the sensor and the location of the robot are known.
For circles the most important thing is the center. So the new locations of the circles are

C = [C_x  C_y  1]ᵀ    (3.32)
C_n = T R C    (3.33)

where T is the homogeneous translation-rotation matrix according to the robot location and R is the homogeneous translation-rotation matrix according to the sensor location. Fortunately, the third parameter of the circle is not affected by the rotation-translation. The uncertainty of the location of the circle, however, is affected by it. The most important effect is that of the uncertainty of the robot location, because this uncertainty changes with time and may also increase. The uncertainty of this location is reflected onto the location of the features according to the law of error propagation:

Σ_n = F_C Σ F_Cᵀ + F_in Σ_in F_inᵀ    (3.34)

where Σ refers to the uncertainty in the coordinates of the center of each circle and Σ_in refers to the uncertainty of the location of the robot. F_in is the Jacobian matrix with respect to the location parameters [x y θ]ᵀ. F_C is the Jacobian matrix of the rotation-translation function with respect to the parameters of the center in 2D, [X_c Y_c], as follows:

F_C = [ cos(θ)  −sin(θ)
        sin(θ)   cos(θ)
        0        0      ]
The translation of the lines is done in the same way as for the circles; the only difference is that the norm length and norm direction of the lines are considered.
The second step of the matcher is to find the corresponding features between each two maps. This is done by the following steps.
- For each circle, calculate the geometric distance between its center and the other centres in the map. The corresponding circle in the other map is the circle whose center-to-center distance is less than a threshold (the best choice is the radius of either of them as the threshold).
- For each line, calculate the geometric distance between its parameters (norm length r and norm direction α) and those of the other lines. The corresponding line is the line with a difference less than a specific threshold too. However, this is not the only condition for lines.
- If two lines have the same slope and intersection, check whether they overlap. If they do, the two lines are corresponding. Otherwise, if the distance between these two lines is less than a specific threshold, consider them as one; else keep each of them alone.
- Calculate the perpendicular distance between each circle and each line in the map. If the distance is less than the radius of the circle, calculate the intersection between the line and the circle. If there is any intersection or overlapping, remove the object with the larger sum of the diagonal of its uncertainty matrix (i.e. keep the more certain one).
After finding the corresponding features from the two different maps, apply the Kalman filter in static mode [13] (update stage) to calculate the new features and the new feature uncertainties:

P_n = P_m1 + K (P_m2 − P_m1)    (3.35)

1/Σ_n = 1/Σ_m1 + 1/Σ_m2    (3.36)

K = Σ_m1 / (Σ_m1 + Σ_m2)    (Kalman filter gain)    (3.37)

where
P_n is the new estimated parameter of any feature.
P_m1 is the parameter estimated for the feature from map number 1.
Σ_m1 is the associated estimated uncertainty for the feature from map number 1.
P_m2 is the parameter estimated for the feature from map number 2.
Σ_m2 is the associated estimated uncertainty for the feature from map number 2.
Σ_n is the new estimated uncertainty.
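For a single scalar feature parameter, equations 3.35 to 3.37 reduce to the following sketch (names are ours; in the implementation the update is applied per parameter of each matched feature):

    def fuse_features(p1, sigma1, p2, sigma2):
        # Static-mode Kalman update, equations 3.35 to 3.37, fusing the same
        # feature parameter seen in two maps (sigma1, sigma2 are variances).
        K = sigma1 / (sigma1 + sigma2)                     # eq. 3.37, Kalman gain
        p_new = p1 + K * (p2 - p1)                         # eq. 3.35
        sigma_new = 1.0 / (1.0 / sigma1 + 1.0 / sigma2)    # eq. 3.36
        return p_new, sigma_new

    # e.g. fusing the x-coordinate of the pillar of Table 4.8 seen in two maps
    x_new, var_new = fuse_features(24.2112, 0.1196, 24.0375, 0.0766)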
The second stage (level) of the matcher is to discriminate all features. This is done by matching the current new map with the predefined map, using the same procedure as above. Here the matcher discriminates circles into paper reels or columns, and discriminates lines into factory walls, container walls or obstacles.
The assumption is that the predefined map does not contain the locations of the paper reels. The matcher tries to find the corresponding column for each circle. If one exists, the circle is flagged as a column. If there is no corresponding column in the predefined map, the circle is considered a paper reel. In the case of lines, the matcher tries to find the corresponding factory wall or container wall for the line feature. If this correspondence exists, the line is considered to be one of them, according to the nearest. If it does not exist, the line is simply considered a matcher error and is removed from the map. In this way the matcher gives us an accurate map. At the end of this stage the matcher calculates the uncertainty of the location of the robot and of the feature extractor. This is done by considering the columns as landmarks for the position coordinates and the walls as landmarks for the direction coordinate.
The main assumption here is that the rotation and translation between the built map and the predefined one, after the previous processing, is small enough to find the exact correspondence between features, which guarantees that we do not have errors in the matcher results.
To understand what is happening here, take a look at Figure 3.7. The total rotation of the built map can be expressed only by the difference in the angles of the line segments. So all that is necessary here is to find the angle of each line segment and compare it with the angle of the corresponding line in the predefined map.
The rotation angle becomes (Δθ = θ_lineSegment − θ_wall). Therefore, the uncertainty in the estimated slope of any line segment reflects directly on the difference in direction. So one can calculate the difference in direction directly as

Δθ = Σ_{i=1}^{n} ( θ_lineSegment_i − θ_wall_j ) / n    (3.38)

and the uncertainty of this difference in direction as

σ_Δθ² = Σ_{i=1}^{n} ( (θ_i − θ̄)² + σ²_slope,Segment_i ) / n    (3.39)
Figure 3.7: Example of matching the created map with the predefined one.
The translation is calculated in the same way, but using the centers of the columns as references instead of the walls. Note that the calculation of the translation has to be done after applying the rotation to the whole map.
3.3.2 Map updating and map generation
Once the matching process is finished, the current global map is generated. The next step is to keep track of all paper reels that appear and disappear in the environment. After discriminating all features in the matcher, the paper reels are extracted to a special data structure such as an array. In each localization step, and to make use of the Kalman filter in its dynamic mode, the new location of the robot itself and the new location of each paper reel in the data structure is predicted according to the velocity and direction of the robot. Then, by using the Kalman filter as described in equations 3.35 to 3.37, each update step adds new paper reels to the map. If the prediction step does not predict a feature (paper reel), i.e. if a paper reel is detected by the scanners but not found in the data structure, a new paper reel is added to the data structure and hence to the map. If a paper reel is found in the data structure and its new location is predicted, but there is no update stage for it, i.e. it does not appear in the scanner images, the paper reel is directly removed from the data structure and hence from the map. Because different paper reels go out of the range of the scanners while the robot is moving, the main effort is spent on keeping track only of those reels that are located within the range of the LRF scanner or scanners.
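The bookkeeping described above can be summarised by the following sketch. It is only an outline, with the prediction, matching and field-of-view tests passed in as callbacks, since the thesis implementation (in Matlab) is not reproduced here:

    def update_reel_tracks(tracks, detections, predict_fn, match_fn, in_view_fn):
        # One bookkeeping step for the paper-reel data structure described above.
        # tracks:     reels already in the data structure
        # detections: circles classified as paper reels in the current global map
        # predict_fn: moves a track to its predicted location (robot motion model)
        # match_fn:   index of the predicted track matching a detection, or None
        # in_view_fn: True if a predicted track lies inside the current LRF range
        predicted = [predict_fn(t) for t in tracks]
        kept, matched = [], set()
        for det in detections:
            j = match_fn(predicted, det)
            if j is None:
                kept.append(det)          # detected but unknown: add as a new reel
            else:
                matched.add(j)            # known reel confirmed (Kalman update elsewhere)
        for j, t in enumerate(predicted):
            if j in matched or not in_view_fn(t):
                kept.append(t)            # keep confirmed reels and reels out of range
            # a reel predicted inside the field of view but not observed is dropped
        return kept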
3.4 Paper Reel Selection
The main topic of this research is to determine the priority of paper reel selection and the direction of grasping each paper reel. First of all, the location of the paper reel is calculated and estimated accurately by the procedures above. The importance of this point becomes clear when one realises that a small difference, say 5 cm, in the estimated position of the paper reel may destroy the reel completely. In this section we discuss the algorithm that allows the vehicle to choose the most suitable reel and to decide from which direction it should be picked up.
3.4.1 Paper availability and reel priority
As described in section 3.3.2 above, once a paper reel is detected it is extracted to a special data structure. The extraction order is first-appeared, first-extracted, no matter where the reel is or how far it is from the truck. The logical assumption one can make to answer the question which reel is better to select first? is to select the nearest, most accessible reel. So the selection algorithm has three stages to find the proper reel and take a decision:
1. Find the most accessible reels.
2. Find the nearest one of the accessible reels.
3. Update the path of the robot on-line if none of the first-appearing paper reels is accessible.
Before calculating the accessibility, each target reel is assumed to be accessible from all 360 degrees. However, because this requires a lot of computational power and space, the assumption is to check accessibility in intervals, which is sufficient to give clear evidence of which directions are accessible. So, first of all, each paper reel is given 72 accessibility directions in steps of 5 degrees, starting from direction 0 with respect to the horizon up to 359 degrees, see Fig. 3.8. A code sketch of the accessibility test for circular obstacles is given at the end of this procedure.
An accessibility direction of a paper reel means that there is no obstacle in the truck's way when grasping the reel from that direction, i.e. the truck has to be able to manoeuvre if it moves toward the reel from that direction.
Figure 3.8: Each reel has 360° of accessibility angles. Instead, they are considered as intervals; thus each reel has 72 accessibility angles.
Figure 3.9: Each object is considered as an obstacle if the distance between that object and the reel is less than 1.7 times the length of the truck, measured to the end of the grippers.
The truck is able to manoeuvre freely if the shortest distance between the surface of the reel and the surface of any obstacle is more than 1.5 times the total truck length (this value was chosen by experiment in the simulated environment; we believe there is a better way to choose it more accurately, but for the sake of this research the assumption is sufficient). Obstacles can be walls, columns or other paper reels. If the distance between any obstacle and the reel is less than 1.5 times the total length of the truck, the accessibility angles are calculated by the following procedure. Make the following definitions:
- The accessibility angle (φ) is the angle, measured from the center of the reel with respect to the horizon, in which there is no obstacle.
- The picking angle (θ) is the robot heading angle (between the robot and the horizon) at the moment of the grasping action. From these two definitions, and as appears in Figure 3.11, the relation between the two angles is θ = φ + π.
Figure 3.10: The case where the obstacle is something like a wall, i.e. the reel is located close to a wall.
- The centre-to-centre distance (P) is the shortest distance between the center of the target paper reel and the center of the obstacle in the case of a circular obstacle, and to the surface of the obstacle otherwise.
- The centre-to-centre angle (γ) is the angle between the center line and the horizon, in the case of a circular obstacle.
- β is the angle between the picking direction and the center-line direction.
The obstacles are divided into two groups according to the size of their surface relative to the diameter of the reel: very large surfaces (lines; walls) and limited surfaces (circles; columns and other reels).
In the case of a very large surface (walls), the shortest distance between the center of the reel and the obstacle is the perpendicular distance to that obstacle. The direction of robot movement (picking direction), the reel and the obstacle create a triangle, as appears in figure 3.10. The figure shows that the inaccessible directions are located between the obstacle, the reel and two picking angles, say θ_min and θ_max. If the minimum distance required to allow the robot to manoeuvre, 1.5d, where d is the total length of the truck, is considered, then these two angles are calculated as follows (see Figure 3.10):

θ_min = floor( mod(α − π/2 + sin⁻¹((P − r − g)/(d + g)), 2π) · 180/(aπ) ) · (aπ/180)    (3.40)

θ_max = ceil( mod(α + π/2 − sin⁻¹((P − r − g)/(d + g)), 2π) · 180/(aπ) ) · (aπ/180)    (3.41)

where P is the distance from the center of the reel to the obstacle, r is the radius of the reel, a is the interval length (in our work a = 5°), α is the obstacle norm direction, and g is a safety distance to ensure safe calculations; in this case we assumed it to be two times the gripper width. Then the possible accessible directions are the rest of the intervals, from θ_max to θ_min moving in the positive direction (counter-clockwise).
In the case of a circular obstacle, the robot calculates the maximum and minimum inaccessible directions according to the belt problem [9], see Fig. 3.11. Using the definitions above, the minimum inaccessible angle is given by

θ_min = floor( mod(γ + π − β, 2π) · 180/(aπ) ) · (aπ/180)    (3.42)

and the maximum inaccessible angle is given by

θ_max = ceil( mod(γ + π + β, 2π) · 180/(aπ) ) · (aπ/180)    (3.43)
The possible accessible directions are the direction intervals lying between the maximum angle θ_max and the minimum one θ_min, moving in the positive direction (counter-clockwise). The main factor in these two equations is the angle β. This angle is calculated by the equation
Figure 3.11: Description of the use of the belt problem to calculate the accessibility angles. The dots refer to the possible picking directions. The Xs refer to inaccessible directions.
β = sin⁻¹( (r_1 + r_2 + g) / P )    (3.44)

where r_1 and r_2 are the radii of the target reel and the obstacle respectively, and g is a safety distance as above.
Convert the circular representation of the accessible intervals to a linear representation, i.e. from figure 3.8 to figure 3.12. As appears in figure 3.12, the inaccessible directions are removed.
The process above is repeated for all obstacles that appear around the target reel. At each step the inaccessible angles are removed.
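The following Python sketch summarises the circular-obstacle part of the procedure above (equations 3.42 to 3.44). It assumes reels and obstacles are given as (x, y, radius) tuples, and for brevity it omits the wall case and the 1.5-truck-length pre-filter; all names are ours:

    import numpy as np

    def accessible_directions(reel, obstacles, a_deg=5.0, g=0.1):
        # Accessibility intervals of one reel against circular obstacles,
        # following equations 3.42 to 3.44 (belt problem).
        n_bins = int(360.0 / a_deg)
        step = np.radians(a_deg)
        accessible = np.ones(n_bins, dtype=bool)
        rx, ry, r1 = reel
        for ox, oy, r2 in obstacles:
            P = np.hypot(ox - rx, oy - ry)              # centre-to-centre distance
            if P <= r1 + r2 + g:
                accessible[:] = False                   # touching/overlapping: nothing is safe
                continue
            gamma = np.arctan2(oy - ry, ox - rx)        # centre-to-centre angle
            beta = np.arcsin((r1 + r2 + g) / P)         # eq. 3.44
            i_min = int(np.floor(((gamma + np.pi - beta) % (2 * np.pi)) / step))            # eq. 3.42
            i_max = int(np.ceil(((gamma + np.pi + beta) % (2 * np.pi)) / step)) % n_bins    # eq. 3.43
            i = i_min
            while True:                                  # flag every interval in the blocked arc
                accessible[i % n_bins] = False
                if i % n_bins == i_max:
                    break
                i += 1
        return accessible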
Once the accessibility of a reel is calculated, the paper reel is exported to another data structure (say a stack or an array). The new data structure contains the position of the reel center, the radius, the accessibility directions, the uncertainty of the location and diameter, and the distance to the robot. The picking priority now goes to the nearest available (accessible) paper reel. If more than one paper reel has the same distance to the robot, the priority goes to the reel that appears first in the data structure. This data structure is valid until the robot moves a reel from its position; at the moment a reel has been moved, the data structure has to be deleted and created again according to the new situation of the remaining reels.

Figure 3.12: Change the representation of the accessibility angles to a linear representation, then remove the inaccessible angles and flag them.
3.4.2 What is the grasping direction?
Once the most accessible, nearest paper reel is chosen, the algorithm ignores all other reels. If this reel has more than one access direction within a single interval, the algorithm chooses the middle one as the picking direction. If the reel has only one access direction, there is no other choice, so that direction is used. If the reel has two, the algorithm chooses either of them; the one that appears first has the first priority. If there are several access directions but they are on different sides of the reel, say north and south, the distance to the reel is calculated using the middle direction of each side and the shortest one is taken.
Calculation of this distance means that the robot should be able to generate its path to the reel from that direction on-line, section 3.1.2. The length of that path is then calculated according to the shape of that path.
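A sketch of this selection rule, assuming the accessible directions of the chosen reel are already grouped into intervals of degrees (the optional path-length callback stands in for the on-line path generation of section 3.1.2; names are ours):

    def choose_picking_direction(intervals, path_length_fn=None):
        # intervals: list of lists of accessible directions (degrees), one list per
        # contiguous interval.  path_length_fn: optional callback giving the length
        # of the on-line path for a candidate heading.
        if not intervals:
            return None
        middles = [iv[len(iv) // 2] for iv in intervals]   # middle direction of each interval
        if len(middles) == 1 or path_length_fn is None:
            return middles[0]
        # several separate intervals: compare the path to the middle of each
        # interval and keep the heading with the shortest path
        return min(middles, key=path_length_fn)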
Chapter 4
Results
The working environment has been modelled as a map of line segments and circles. Walls and other types of obstacles are represented as lines, while columns and paper reels are represented as circles. Such a representation could be accepted in different environments. Lines are represented by two points for each line, P_start and P_end. Circles are represented by only one point, P_center, and a radius r. Objects are classified by labels; each object type has one label, for example 1 for walls, 3 for columns, and so on.
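As an illustration of this representation (the field names and values below are ours, chosen to match the description above), the labelled map could be held as:

    from dataclasses import dataclass

    @dataclass
    class LineFeature:
        p_start: tuple   # (x, y) of the first end point
        p_end: tuple     # (x, y) of the second end point
        label: int       # e.g. 1 for warehouse walls

    @dataclass
    class CircleFeature:
        center: tuple    # (x, y) of the centre point
        radius: float
        label: int       # e.g. 3 for concrete columns, another value for paper reels

    warehouse_map = [
        LineFeature((0.0, 0.0), (80.0, 0.0), label=1),   # one wall of the 80 x 60 m hall
        CircleFeature((12.0, 7.0), 0.75, label=3),       # a 1.5 m diameter concrete column
    ]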
All our work was done on this environment. Sometimes, and at some stages, we divided this environment to serve some purpose, as when we dealt with map building: while we considered the whole map as an input for the LRF, we used only the environment without paper reels for robot navigation, and left the algorithms to discover the rest of the environment (the paper reels).
Figure 4.1: Testing environment.
Figure 4.2: The truck shape and the arrangement chosen for fitting the LRFs on the truck. Both are fitted at the same height.
This environment has been simulated in Matlab. The implementation of the methods above followed the same procedure as described in the methods chapter. A global map has been built from two range finders. The map is analysed to find the best order for selecting paper reels. Then the proper reel is chosen and the proper direction is defined.
The testing environment consists of 4 warehouse walls, one of which is divided into two segments around an entrance for a container, 3 walls of the container, 42 concrete columns with a diameter of 1.5 m, and 48 paper reels with the same diameter as the concrete columns. The environment is illustrated in Figure 4.1. The black lines represent the warehouse walls. The blue lines represent the container. The black circles are the concrete columns. The red ones are the paper reels. The environment size is 80 × 60 m². The distance between one column and the next is 12 m in the X axis and 7 m in the Y axis. The size of the container is 8 × 22 m² (the size of each component in this environment is chosen to be close to the actual size in the real world; however, because of the high number of possible choices in practice, the chosen sizes are not exact, especially for the truck simulation). As appears in the figure, some of the columns are located among the paper reel stacks, which was one of the challenges.
The truck is simulated to look like an ordinary one. The size of the truck is arbitrary. The length from the end point to the fork face is 2500 mm. The total width is 1600 mm. The fork size is 1200 × 50 mm. The other dimensions are not necessary here because we are working with a 2D environment. This is illustrated in Figure 4.2.
The laser scanners are simulated by calculating the intersection between each laser beam and each feature of the map. A normally distributed error is added to the scan image to make it more realistic. A variance of 20 mm per meter of range and 0.1 degree per degree of bearing is chosen as the error deviation. The resolution of the scanner is 0.5 degree, which means two beams per degree. The maximum range of the laser beam has been chosen to be 35 m. Two laser scanners are added to the simulation environment. The first one is fixed in the middle of the forklift and directed straight forward; it covers 180° of directions, 90° on each side. The other one is fixed beside the first one but with a slight rotation, see figure 4.2 for the LRF arrangement; this one covers an area of 270 degrees, 135 degrees on each side.
The LRF simulation works by inspecting and calculating the intersection between each LASER beam and each object in the map. The order of this function becomes O(nm), where n is the number of LASER beams and m is the number of objects in the map. The first run of the LASER simulator function took a total of 3 minutes, and that was for only one scan. To minimise the computation time, we removed from the map all objects located outside the range of the LRF. Also, in the first implementation the length of each laser beam was 80 m, but later, to minimise the time as much as possible, the length of the beams was reduced as mentioned above. The current implementation takes up to 1 minute in the worst case.
Figure 4.3 shows the resulting scans from the two LRFs. The blue points are the scans from the first one. The magenta points are the scans from the second one. The figure shows the scans relative to the LRF position, so the scans are rotated and translated to the origin. The difference in scan points between the two scan images is due to the difference in rotation and location of the two LRFs on the forklift body. Figure 4.4 illustrates the error in the scan points.

Figure 4.3: Scan image from a simulated LRF. The image shows that the scan is relative to the scanner position.

All scan points are extracted in polar form, P_s = (θ, r). So first all points are converted to Cartesian coordinates, then rotated to the global frame, and then converted back to polar form. After that they are passed to the segmentation algorithm. Figure 4.5 shows the scans after rotating and translating them to the global frame.
Figure 4.4: Simulated errors in the scans.
Figure 4.5: Scan points after preprocessing. All points are rotated and translated to the global map.
4.1 Segmentation
Five different types of segmentation algorithm have been tested: the one mentioned in section 3.2.3 and the four other algorithms described in [11]. We were interested in the adaptive-point algorithms, especially the one we describe in section 3.2.3, and in the Kalman filter algorithms, especially the second one. The two Kalman filter algorithms described in [11] predict the next point in the scan image according to the behaviour of the previous points. Both are very good algorithms if the scan images present the environment features as lines only, where the behaviour of the next point is predictable. In our case they are of no use because we have two kinds of features, circles and lines. On a circle, such as the one illustrated in figure 4.4, these algorithms segment the points into two or more groups according to the direction of the gradient, so they were inapplicable in our case. The poor and small number of features detectable by these two algorithms (which are based on the Kalman filter) was the main reason to reject both of them. The three adaptive point-to-point distance algorithms presented in [11] have also been tested. The main thing about these algorithms is that they depend on one or two constant parameters for noise reduction. Depending on the choice of these parameters, the segmentation can be rich or poor; however, there is a limit to how rich this process can be. Finally we chose the Point-Distance-Based Segmentation method (PDBS) [11].
C_0                                            0.5σ_r    σ_r     1.5σ_r
Total number of segments                        12        15      17
Number of effective segments                     6         8       9
Number of correct segments                       6         8       9
Number of incorrect segments                     4         1       1
Average length of a segment                     10.9      12.2    13.6
Average length of an effective segment          10.8      13      15.3
Number of unconsidered scan points (of 360)     229       177     129

Table 4.1: A comparison between segmentation results: number of effective segments and length of segments when three values of the noise reduction constant are used.

Figure 4.6: Extracting features and building the local map. σ here is 0.2. Good estimation for the line segment at the bottom.

In Table 4.1 above, effective segments means the segments that can be interpreted as a meaningful feature. Correct segments means the segments that cover a real feature. Incorrect segments are the segments that consider only part of the points reflected from a real object.
It is obvious that more features are detected when C_0 is higher. However, the errors at the fringes (the limit of the LRF range) are also larger. The best choice according to these figures and the table would be to choose the highest value of C_0. However, when the distance from a point to its neighbouring point is not really large, choosing a high C_0 gives a bad result and a high error in feature detection, see figure 4.8. In this figure the feature at location (20, 50) is totally wrong: instead of segmenting the two circles into two different segments, the procedure returns one segment, which is fitted to one circle. On the other hand, the line at the top left of the figure is segmented into several segments, where each of them can fit a circle with a radius smaller than the threshold, which leads to rejecting all lines in that place. So, as a result, choosing the value of C_0 to be equal to σ_r was the best choice for us.

Figure 4.7: Extracting features and building the local map. σ here is 0.2. Note the bad estimation of the line on the left.
4.2 Extracting Map

f#    U_Cx           U_Cy           U_R            C_x      C_y      R      e_x     e_y     e_R
L1    0.1222·10⁻³    0.0574·10⁻³    -              1.557    0.173    -      0.014   0.173   -
C2    0.2131·10⁻³    0.3245·10⁻³    0.3611·10⁻³    11.998   6.000    0.751  0.001   0.000   0.001
C3    0.0052         0.0008         0.0038         23.955   6.008    0.701  0.045   0.008   0.051
C4    0.0968         0.1460         0.0810         36.084   5.830    0.877  0.084   0.170   0.127
C5    0.0084         0.0014         0.0059         23.880   13.933   0.672  0.120   0.067   0.075
C6    0.2186         0.3546         0.3805         12.002   14.008   0.755  0.002   0.008   0.005
C7    0.0039         0.0173         0.0163         12.100   22.157   0.896  0.100   0.157   0.146
L8    1.8148         0.0060         -              1.120    43.1028  -      error   error   -

Table 4.2: Feature extraction from one LRF scan and building of the local map as in fig. 4.6. L_i refers to a line feature, C_i to a circle feature. An empty place means no reading is available for this measure because it does not apply to that feature; error refers to a bad estimation.
The features are fitted directly after the segmentation procedure. The circle fitting followed the algorithm mentioned in section 3.2.3: it starts by fitting a circle to its corresponding points algebraically and then forwards the result to the geometric fitting algorithm to find an accurate fit. We have used the Matlab scripts from [4] for this step: the algebraic circle fit by Taubin, based on Newton's method, whose result is then forwarded to the geometric circle fit algorithm, which minimises the orthogonal distances based on the standard Levenberg-Marquardt scheme in the full (a, b, R) parameter space.
f#    U_Cx           U_Cy           U_R            C_x      C_y      R      e_x     e_y     e_R
C1    0.0059         0.0086         0.0105         24.016   13.981   0.753  0.016   0.019   0.003
C2    0.0090         0.0021         0.0058         35.803   21.930   0.611  0.197   0.070   0.139
C3    0.0042         0.0012         0.0030         23.892   21.934   0.669  0.039   0.066   0.081
C4    0.0029         0.0057         0.0046         23.961   29.960   0.709  0.039   0.040   0.041
C5    0.0010         0.0071         0.0054         11.971   30.013   0.751  0.013   0.029   0.001
C6    0.0474·10⁻³    0.1713·10⁻³    0.1485·10⁻³    12.005   21.998   0.745  0.005   0.002   0.005
L7    0.3644         0.0001         -              0.037    0.959    -      0.037   0.959   xxxx

Table 4.3: The uncertainty and location of the features in fig. 4.7. L_i refers to a line feature, C_i to a circle feature. An empty place means no reading is available for this measure because it does not apply to that feature.
Figure 4.8: The result of building a local map from circles and lines.
The line fitting algorithm also follows the method in section 3.2.3. However, in the implementation, and because there is an error in the estimation of the value of α, we have chosen an interval for applying equation 3.30: if −5° ≤ α ≤ 5° or 175° ≤ α ≤ 185° we apply equation 3.30, and we apply equation 3.29 elsewhere. This gave us the flexibility to estimate accurate start and end points of a line segment, and with it we escaped from the problem of the constant X on vertical lines.
f#    U_Cx           U_Cy           U_R            C_x      C_y      R      e_x     e_y     e_R
C1    0.0085         0.0011         0.0073         36.072   38.009   0.816  0.072   0.009   0.066
C2    3.532·10⁻⁵     1.284·10⁻⁵     4.105·10⁻⁵     24.001   37.999   0.751  0.001   0.001   0.001
C3    0.0104         0.0266         0.0260         31.013   50.811   0.720  0.013   0.039   0.030
C4    0.0081         0.0054         0.0101         27.744   50.682   0.555  0.136   0.168   0.195
C5    0.0005         0.0026         0.0020         26.292   50.778   0.714  0.038   0.072   0.046
C6    0.0006         0.0058         0.0044         24.014   46.085   0.804  0.014   0.085   0.054
C7    0.0554         0.1069         0.1413         19.481   49.611   0.852  0.699   1.239   0.102
C8    0.0022         0.0049         0.0031         12.157   53.757   0.620  0.157   0.243   0.130
C9    0.0021         0.0014         0.0019         12.025   45.958   0.728  0.025   0.042   0.022
L10   3.9193         0.0003         -              3.067    3.830    -      0.0746  3.830   -

Table 4.4: The uncertainty and location of the features in fig. 4.8.
Figure 4.9: Extracting features and building the local map. The best estimation is obtained for close paper reels.
Figure 3.6 shows the result of fitting a line using equation 3.29 instead of equation 3.30; the error in estimating the start and end points of the line segment is clear. Figure 3.5 shows the result of fitting a circle to its scan points. In this figure it is clear that there is a segment whose interpretation is rejected by the feature extraction algorithm because of its high uncertainty in both the line and the circle fit (the segment at the bottom).
f#    U_Cx           U_Cy           U_R            C_x      C_y      R      e_x     e_y     e_R
C1    0.0011         0.0002         0.0008         36.017   45.991   0.753  0.017   0.009   0.003
C2    0.0024         0.0050         0.0047         37.115   50.585   0.523  0.065   0.265   0.227
C3    0.0201         0.0697         0.0724         35.609   50.903   0.792  0.029   0.053   0.042
C4    0.0029         0.0087         0.0078         34.085   50.749   0.673  0.005   0.101   0.077
C5    0.0013         0.0051         0.0046         32.527   50.851   0.739  0.103   0.001   0.015
C6    0.0009         0.0036         0.0031         30.974   50.877   0.775  0.006   0.027   0.025
C7    0.0003         0.0013         0.0011         29.425   50.833   0.739  0.005   0.017   0.011
C8    0.0996·10⁻³    0.6666·10⁻³    0.4763·10⁻³    27.874   50.827   0.736  0.006   0.023   0.014
C9    0.0795·10⁻³    0.5340·10⁻³    0.3700·10⁻³    26.334   50.825   0.731  0.004   0.025   0.019
C10   0.0105         0.0085         0.0167         24.753   50.834   0.757  0.027   0.016   0.007
C11   0.0212         0.0256         0.0426         23.236   50.839   0.724  0.006   0.011   0.026
C12   0.2934·10⁻³    0.1069·10⁻³    0.3173·10⁻³    24.014   45.990   0.733  0.014   0.01    0.017

Table 4.5: The uncertainty and location of the features in fig. 4.9.
Figures 4.6 to 4.9 show the results of extracting one local map from one LRF device by fitting lines and circles. Table 4.2 is the table corresponding to figure 4.6. It contains the value of the uncertainty for each feature and its parameters. The first column of the table is a classifier column, where C and L refer to circle and line features respectively. In the case of a line, x becomes α and y becomes r.
In Figure 4.6 one can note that 7 features are fitted correctly and one feature is ignored (24, 22). This happens because of the high uncertainty that appears when fitting this feature to a circle, and because the radius of the resulting circle is below the validation interval. The validation interval is an interval of radii such that, if the estimated circle radius lies inside it, the circle is acceptable, otherwise it is rejected. In this case the fitted radius was a little less than 0.5 meter, so the circle was rejected. The line L8 in table 4.2, which is the linear feature at the top of figure 4.6, is totally wrong; the first column of the table, which shows the uncertainty of the angle direction, makes it obvious how high it is. The other feature estimates are acceptable, except C7, which shows a high error in the center position estimation, up to (10, 16) cm (columns 8 and 9 respectively). The reason for this high error is that the feature is far from the range finder, where the error becomes higher.
Figure 4.7 and its corresponding table 4.3 show the result of extracting another map. The resulting features are again acceptable, except the line L7, which shows a high uncertainty in the estimation of the norm angle, leading to a bad estimation of the norm length of almost 1 meter. Circle C2, which is the column farthest from the scanner (the LRF location is at (15, 17)), also shows a high error: 14 cm in the diameter estimation and (20, 7) cm in the center estimation.
Figure 4.10: The summation of two maps into a bigger one. The biggest circles represent the extraction of the paper reels in the first map. The smallest ones represent their extraction in the second map. The middle ones show the summation of both using the Kalman filter.
The situation in Figure 4.8 and its corresponding table 4.4 is different. Here we chose the noise reduction constant to be C_0 = 1.5σ_r. Although there is one paper reel that is estimated totally wrong, due to the segmentation having considered the scan points from x = 18 to x = 20 as one segment, the results of extracting the other features are acceptable. As in the situations in figures 4.6 and 4.7, the farthest features C4, C8 and L10 are badly estimated due to their distance from the scanner; the scanner here was at (18, 38). C4 is a special case: its bad estimation is due to the small number of points used to fit it (5 points).
Figure 4.9 is another example of an extracted map. Here it is clear that almost all paper reels are estimated very well, except circle C2 in table 4.5. This circle, which is estimated too small and with an uncertain position, is also fitted from 5 points, and these points are the farthest from the scanner. The conclusion here is that the best estimation of the paper reels is obtained for reels that are close to the scanner.
4.3 Sum Maps into one
As mentioned in the methods chapter, summing maps into one is done using the Kalman filter. Figures 4.10 to 4.18 show the effect of combining maps together using the Kalman filter. Figure 4.10 is just an example of the result of estimating pillar locations. In the figure the enhancement in the new map (black circle) is obvious; it gives us an advantage over the uncertain estimation from any single map.
Figure 4.11: The result of summing two line segments together. The algorithm sums the first two left segments but does not add the third line to them.
f#           U_α            U_r            α       r       x1       y1       x2       y2
L1_M1        0.0025         0.0000         4.7349  0.3961  17.7556  0.0037   18.7644  0.0265
L1_M2        0.6526·10⁻⁵    0.2631·10⁻⁵    4.7138  0.2650  8.2751   -0.0149  11.9015  -0.0098
L2_M2        0.1447·10⁻³    0.0299·10⁻³    4.7138  0.6381  17.5606  -0.0312  19.3631  0.0311
L1_result    0.6491·10⁻⁵    0.2484·10⁻⁵    4.7066  0.0472  8.2751   -0.0951  18.7644  -0.1558
L2_result    0.1447·10⁻³    0.0299·10⁻³    4.7138  0.6381  17.5606  -0.0312  19.3631  0.0311

Table 4.6: Summation of two lines into the global map, fig. 4.11, a case spotlight.
Table 4.6 gives numerical results for the case in figure 4.11. In this case the algorithm applies the Kalman filter very well. However, only two of the three line segments are summed together into the new map, while the other one is transferred as it is. Such cases happen because in the algorithm we supposed that there is only one suitable corresponding feature object in the second map for each object from the first map, or none at all. So whenever two or more objects from the second map correspond to one object from the first map, the first one is summed and the rest are transferred; the same is done if two objects from the first map correspond to one object in the second map.
Table 4.7 and figure 4.12 give another example of a case that happens when adding vertical lines together into a global map. The result sometimes becomes unexpected, as in this case, where the result lies outside both inputs. Our explanation for this situation is that, if we look at the uncertainty of the norm length estimation (L_M1, U_r) in table 4.7, we can note that the uncertainty is zero, which leads to a singular matrix, or a badly estimated one, when it is used in the Kalman filter. The same thing happens in figure 4.13 and table 4.8.

Figure 4.12: The adding of two segments when they are vertical. Note that the result is not as expected: one would expect the result to lie somewhere between the two smallest lines, but instead the new, longer line segment lies outside both original segments.

f#          U_α           U_r           α       r      x1     y1      x2     y2
L_M1        0.003         0.0000        0.0075  0.128  0.285  20.969  0.256  17.109
L_M2        0.246·10⁻⁵    0.163·10⁻⁵    0.0008  0.011  0.028  22.013  0.017  7.787
L_result    0.206·10⁻⁵    0.122·10⁻⁵    6.2532  0.041  0.701  22.013  0.275  7.787

Table 4.7: Summation of two vertical lines; objects in figure 4.12. A case spotlight.
Combining the circles in figure 4.13 and table 4.8 gives a result worse than both initial objects. However, comparing with the exact location of this pillar, (24, 6) with R = 0.75, we can note from table 4.8 that the result tries to minimise the error in the Y direction.

Figure 4.13: Another example of the summation of two circles (columns or paper reels) from two different maps using the Kalman filter. The black one is the result of the summation.
f#          U_Cx     U_Cy     U_R      C_x      C_y     R
C_M1        0.1196   0.0057   0.1097   24.2112  6.2693  0.8928
C_M2        0.0766   0.0069   0.0653   24.0375  6.1140  0.7986
C_result    0.0404   0.0029   0.0359   24.2729  6.1916  0.9767

Table 4.8: Result of the summation of two pillars, as in fig. 4.13.
Figure 4.14: Summation of two local maps into one again. The bottom line segments are summed together correctly, while one segment appeared on the left; this segment is simply transferred to the new map without change.
Figure 4.15: In this figure two local maps are summed together: the map from Figure 4.16 and the map from Figure 4.17. Note the difference between them.

Figure 4.14 shows one of the best results of summing two maps together, a case where we do not have many special cases such as those in figures 4.13 and 4.11 above.
Figure 4.15 gives another example of summing two local maps from two different LRFs. These two maps are shown in Figures 4.16 and 4.17. Note that the number of extracted features in the second map, 4.17, is larger than in the first one, 4.16. This is because the second LRF is simulated to have a wider range than the first one (a 270° angular range, while the first one has only 180°). So the extra objects are transferred to the global map (Figure 4.15) as they are, while the other objects (paper reels here) are summed using the Kalman filter. Note also the bad estimation of one line. This bad estimation happens because of an error in the segmentation algorithm; such errors can be avoided or minimized by decreasing the value of the noise reduction constant C_0 in the segmentation algorithm. Table 4.9 shows numerical results of the summation of the two maps in Figures 4.16 and 4.17. The last part of the table (Sum) shows the error of the paper reel position estimates compared with their true locations; it is obvious how the position estimation is enhanced in the final result. Figure 4.18 shows the result of building a global map with no prior knowledge about the environment. The map in this figure is built from 112 scan images, 56 from each laser range finder, while the robot was moving from location (10, 10) to (30, 50) in an almost straight line.
        C_x      C_y      R       U_Cx      U_Cy      U_R       e_x     e_y     e_R
Map_1   36.0092  46.0093  0.771   0.0038    0.0005    0.0028    0.0092  0.0093  0.021
        35.6629  50.7907  0.7177  0.0079    0.0167    0.0202    0.0329  0.0593  0.0323
        32.5118  50.8821  0.7735  0.0014    0.0056    0.0053    0.0182  0.0321  0.0235
        23.265   50.8134  0.6972  0.0118    0.0291    0.0359    0.035   0.0366  0.0528
Map_2   35.9972  45.9978  0.7501  0.0013    0.0002    0.0009    0.0028  0.0022  0.0001
        35.6202  50.9086  0.791   0.0057    0.0225    0.0232    0.0098  0.0586  0.041
        32.5657  50.8704  0.7849  0.0017    0.0045    0.0049    0.0357  0.0204  0.0349
        23.2416  50.787   0.6869  0.0049    0.0125    0.0147    0.0116  0.063   0.0631
Sum     35.997   46.0009  0.754   0.00094   0.00016   0.00066   0.003   0.0009  0.004
        35.6164  50.7885  0.6908  0.0032    0.0086    0.0096    0.0136  0.0615  0.0592
        32.5319  50.8566  0.7593  0.0008    0.0024    0.0024    0.0019  0.0066  0.0093
        23.2476  50.7942  0.6898  0.0034    0.0085    0.0102    0.0176  0.0558  0.0602

Table 4.9: The result of fusing two local maps together using the Kalman filter. Map_1 represents some features from Figure 4.16; Map_2 represents the same features from Figure 4.17. The last part of the table shows the result of summing them together, as appears in Figure 4.15.
Table 4.10: The position, uncertainty and error of the same paper reels and column from table 4.9 after accumulating several local maps. These features are represented in the right places, as appears in Figure 4.18.

C_x      C_y      R       U_Cx           U_Cy           U_R            e_x     e_y     e_R
35.9925  46.0008  0.7446  0.15·10⁻⁴      0.039·10⁻⁴     0.11·10⁻⁴      0.0075  0.0008  0.0054
35.5979  50.8141  0.7058  0.14·10⁻³      0.48·10⁻³      0.5·10⁻³       0.0321  0.0359  0.0442
32.5265  50.8454  0.7443  0.0509·10⁻⁴    0.0915·10⁻⁴    0.1168·10⁻⁴    0.0035  0.0046  0.0057
23.2171  50.7173  0.6564  0.077·10⁻³     0.4467·10⁻³    0.2763·10⁻³    0.0129  0.1327  0.0936
Some features in the map are detected and estimated in an acceptable way. However, some features, like the concrete pillar at (24, 36), have not been detected. This sometimes happens because some other feature takes a higher priority in the matching algorithm, or because the feature uncertainty is bigger than the threshold. The other type of bad estimation consists of the lines that appear in the middle and at the top of the map, and the circle at point (29, 49.5). Such bad estimations happen because of segmentation errors; this kind of error can be avoided by decreasing the noise reduction constant C_0 in the segmentation algorithm.
The lines at the bottom of the map and those on the left represent two of the walls in the environment. It is obvious that some of them are summed together and others are transferred as they are. This happens because of the threshold used in the matching algorithm, which is based on the euclidean distance between two matching lines.
Figure 4.16: The local map that is extracted from the first LRF at scan image number 71.
If the norm angles of two line segments lie in opposite directions, like 5° and 175°, the result becomes as in the figure above. To solve this problem it is important to calculate the threshold for the norm length and the norm direction separately, instead of calculating them together.
The bad line estimation at the top of the map happens when a scan-point segment contains only 5 points: 2 from the end of one column and 3 from the beginning of the next column's or line's scan. In such a case the estimation of the line returns a very low uncertainty. This causes the removal of some features from the map, because the algorithm takes the sum of the diagonal of the uncertainty matrix to decide which feature to keep and which to remove whenever an intersection between two features appears. One suggestion is to use prior knowledge of the environment to tell the robot that one feature has a higher priority than the other (in the matching algorithm) before using the uncertainty; however, this suggestion has not been tested.
Table 4.10 gives numerical results of the paper reel estimation in figure 4.18, their uncertainties and the estimation errors in meters with respect to the real positions in the environment. These paper reels are the same ones that appear in table 4.9. Comparing the results in the two tables shows better uncertainty, but also a difference in the position error.
Figure 4.17: The local map that is extracted from the second LRF at scan image number 71.
Figure 4.18: The result of building the global map by combining 112 local maps; 56 local maps from each LRF, built from 56 scan images over 56 positions of the truck.
4.4 Map analysis
The predefined map is processed to find the most suitable picking order and the best picking direction. The main procedure follows the one described in section 3.4. The predefined map is processed according to the location of the robot; the first paper reel is the nearest available one to the robot.

Figure 4.19: The simulation environment and the initial position of the truck. Light circles are concrete pillars. Dark circles are paper reels. Lines are walls and the container.

The simulation environment is the same as the environment above. The only difference is the assumption about the initial location of the robot. Figure 4.19 shows a map of the environment and the initial position of the truck. The assumed truck location is based on the belief that the truck should start somewhere near the container (door) location, because this is more convenient and less costly. The paper reel arrangement is given in figure 4.20.
The availability of any paper reel is calculated in the same way as in section 3.4 above. First, consider as an obstacle any object located at a distance less than 1.5 times the total length of the truck from the target paper reel.
Figure 4.20: The paper reel arrangement.
Figure 4.21: All papers are processed, then the access directions for each paper reel are determined (the magenta points).
Then apply equations 3.42 and 3.43 for circular obstacles and equations 3.40 and 3.41 for linear obstacles. The paper reel that has the smallest number of inaccessible directions is the most available paper reel. For calculating the nearest paper reel, we considered the straight-line distance from the forklift's initial position to the paper reel. We assumed that the container is the destination to which the paper reels have to be moved; it is more convenient to call it the loading area. The new distance is then the straight-line distance from the paper reel to the center of the loading area. In practice we think this assumption is acceptable, because the mission of the truck is to transfer this stack of paper reels from its current location (the unloading area) to the new location (the loading area). If the loading area is empty (as is expected), the center point of the loading area represents the mean of the total travelling distance of the truck mission inside the loading area (container). The nearest reel is then the reel with the minimum sum of distances (truck-to-reel plus reel-to-loading-area). Figures 4.21 to 4.26 show how the procedure is done.
Figure 4.22: Paper reel #13 is removed first, then the rest of the paper reels are processed again.
Figure 4.23: Paper reel number 12 is removed this time, then the map is processed again.
First, in figure 4.21 the availability (accessibility directions) is calculated. The magenta dots on each paper reel show its accessibility directions. Then the nearest reel is extracted to an external matrix, taking the highest priority to be moved. Its place then becomes empty, which opens accessibility toward the next paper reel, figure 4.22. Again the accessibility is calculated, the nearest reel is chosen and then moved, and so on until the whole stack is processed. However, when we use the words moved or removed at this level we do not mean moving the reel physically; it only means extracting the reel from its original matrix to another one, where the papers are arranged according to their priority. This matrix is the result of this algorithm: it returns the priority, the paper reel number from figure 4.20, the position of the paper reel center, the paper reel diameter and all accessibility directions for the reel (we now believe that extracting this matrix at this level is not necessary, since the next step can be done directly after each paper reel, and the reel could then be moved physically).
Note in figures 4.21 to 4.26 that the procedure goes well. In figure 4.21 the reels that have the highest number of accessibility directions are the reels in the corners. Note that reel number 26 in Figure 4.21 has only a few accessibility directions, while after removing paper reel number 13 (Figure 4.22) its number of accessibility directions increases remarkably. On the other hand, the paper reels in the lowest row (except the corners and reel number 12) have only one accessibility direction, because they are all surrounded by each other. Paper reel number 12, which is surrounded by other reels from 3 directions and by a column from the south direction, figure 4.19 (column number 20 is located at a distance less than 1.5 times the robot length), has no accessibility direction at first (figure 4.21). Then, after removing paper reel number 13, reel number 12 gets a more suitable number of access directions (figure 4.22), which gives it the priority to be removed (figure 4.23). The access directions of paper reel number 12 are not in conflict with the column location or with the locations of the other paper reels. The other paper reels are treated in the same way. The remaining figures, up to 4.26, show the removal priority of the first 10 paper reels. Table 4.11 shows the result of this procedure.

Figure 4.24: Now paper reel number 26 is removed. Note that two different and separate intervals of accessibility directions appear on paper reel #11.
Figure 4.25: The same for paper reel number 11.
Figure 4.26: And so on for all other reels until all papers are processed.
Figure 4.27: The picture shows picking intervals and access directions.
Figure 4.28: The path is generated by using a b-spline function and by giving 10 different points to the function as input. The path is calculated to each paper reel from each possible picking direction, then the shortest one is chosen.
After finding the best order for moving the paper reels (column 2 in table 4.11), the next step was to find the most suitable direction among the available ones to pick up each reel, i.e. to find the best interval among the intervals in the fifth column of table 4.11. The best interval is shown in the sixth column of the same table.
This step is done by choosing the direction that ensures the shortest distance from the robot location to the paper reel when approaching from that direction.

Figure 4.29: All possible paths to the paper reel.
Figure 4.30: The chosen path is the shortest and easiest of the paths in figure 4.29.
Figure 4.31: Selected paths and picking directions for each reel.
Priority  P.No.  Center (x,y)  Radius  Access directions          Best access
1         13     (37.2,50.9)   0.75    {95°:5°:180°}              100°
2         12     (35.6,50.9)   0.75    {115:5:175}                120
3         26     (37.2,52.4)   0.75    {90:5:235}                 90
4         11     (34.1,50.9)   0.75    90 & {130:5:175}           90
5         25     (35.6,52.4)   0.75    {110:5:175}                110
6         10     (32.5,50.9)   0.75    {90:5:105} & {145:5:175}   90
7         24     (34.1,52.4)   0.75    {90:5:180}                 90
8         9      (31.0,50.9)   0.75    {90:5:175}                 90
9         23     (32.5,52.4)   0.75    {90:5:160}                 90
10        39     (37.2,55.3)   0.75    {115:5:220}                120
11        8      (29.4,50.9)   0.75    {90:5:175}                 90
12        22     (31.0,52.4)   0.75    {90:5:160}                 90
13        48     (33.3,53.9)   0.75    {90:5:145}                 90
14        38     (35.6,55.6)   0.75    {195:5:220}                200
15        7      (27.9,50.9)   0.75    {90:5:175}                 100
16        47     (31.8,53.9)   0.75    {90:5:155}                 90
17        21     (29.4,52.4)   0.75    {90:5:160}                 90
18        37     (34.1,55.3)   0.75    {90:5:100} & {190:5:220}   90
19        6      (26.3,50.9)   0.75    {90:5:175}                 100
20        46     (30.2,53.9)   0.75    {90:5:160}                 90
21        20     (27.9,52.4)   0.75    {90:5:160}                 100
22        36     (32.5,55.3)   0.75    {90:5:130} & {190:5:220}   90
23        5      (24.8,50.9)   0.75    {100:5:175}                100
24        45     (28.7,53.9)   0.75    {90:5:165}                 90
25        35     (31.0,55.3)   0.75    {90:5:145} & {185:5:220}   90
26        19     (26.3,52.4)   0.75    {120:5:160}                100
27        4      (23.2,50.9)   0.75    {120:5:175}                120
28        34     (29.4,55.3)   0.75    {90:5:220}                 90
29        44     (27.1,53.9)   0.75    {90:5:165}                 100

Table 4.11: The shape of the table generated by the algorithms described in this research when analysing a given map of paper locations. This is only a sample from the simulation environment above.
The main problem that we can address here is that we could not update the
path on-line, i.e. the path has to be generated before any movement of the
truck. Figure 4.29 shows all possible paths that reach the paper reel from all
access directions. It is obvious that the lowest path is the shortest one; it is
the path with heading direction 100°, so this is the selected direction, as in
figure 4.30. After determining the picking direction, all other picking
directions are removed and the chosen one is extracted to the ordering table,
as in table 4.11. The procedure then continues for the rest of the paper reels
in the stack. Figure 4.31 shows the result of applying this algorithm to all
paper reels, according to the paths chosen as in figure 4.28.
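As an illustration of how this selection could be implemented, the sketch below lays out 10 control points from the robot towards an approach pose for every accessible picking direction, smooths them with a uniform cubic b-spline, and keeps the shortest resulting path. The stand-off distances (0.5 m to 4 m in front of the reel) and the straight-line placement of the first control points are illustrative assumptions; the thesis implementation may place its 10 input points differently.

    % Sketch of the path-based choice of picking direction (figures 4.28-4.30).
    % robotXY and reelXY are 2-by-1 positions [m], reelR is the reel radius,
    % dirsDeg is a row vector of accessible picking directions in degrees.
    function [bestDir, bestPath] = choosePickingDirection(robotXY, reelXY, reelR, dirsDeg)
        bestLen = inf; bestDir = NaN; bestPath = [];
        for d = dirsDeg
            u    = [cosd(d); sind(d)];                    % unit vector of the picking heading
            goal = reelXY(:) - (reelR + 0.5) * u;         % stop just in front of the reel
            pre  = reelXY(:) - (reelR + 4.0) * u;         % pre-approach point on the same axis
            t    = linspace(0, 1, 7);
            ctrl = [robotXY(:) * (1 - t) + pre * t, ...   % 7 points from the robot to the pre-approach point
                    reelXY(:) - (reelR + 2.5) * u, reelXY(:) - (reelR + 1.5) * u, goal];  % 10 points in total
            ctrl  = [ctrl(:,1) ctrl(:,1) ctrl ctrl(:,end) ctrl(:,end)];  % repeat ends so the spline reaches them
            curve = cubicBSpline(ctrl, 20);               % sampled b-spline path
            len   = sum(sqrt(sum(diff(curve, 1, 2).^2, 1)));
            if len < bestLen                              % keep the shortest path and its direction
                bestLen = len; bestDir = d; bestPath = curve;
            end
        end
    end

    function pts = cubicBSpline(ctrl, samplesPerSeg)
        % Evaluate a uniform cubic b-spline over the 2-by-N control polygon.
        M   = [-1 3 -3 1; 3 -6 3 0; -3 0 3 0; 1 4 1 0] / 6;   % cubic b-spline basis matrix
        pts = zeros(2, 0);
        for i = 1:size(ctrl, 2) - 3
            P = ctrl(:, i:i+3)';                          % four consecutive control points
            for t = linspace(0, 1, samplesPerSeg)
                pts(:, end+1) = ([t^3 t^2 t 1] * M * P)'; %#ok<AGROW>
            end
        end
    end

The selected direction in figure 4.30 is then simply the one whose spline is shortest; the end control points are repeated because a uniform b-spline does not otherwise pass through its first and last points.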
Figure 4.29 shows the generated path for each available picking direction.
This figure is a zoomed-in view of figure 4.28. It is obvious that each path
has its own length. The shortest of these paths is chosen as a guide to the
suitable picking direction, as in figure 4.30. From figure 4.30 we can note
how the picking direction and the path to that direction are chosen to ensure
safe manoeuvring for the truck (figure 4.32). Figure 4.31 shows all suitable
picking directions for all paper reels in the stack. One should note that the
picking direction and the path are determined according to the sequential
transfer of the paper reels. This means that if there is any intersection
between the path and another paper reel, which happens a lot in the figure, it
cannot be considered a problem, because the intersected paper reel will already
have been removed at the time the path is used. Figure 4.33 shows how the robot
handles the paper reels. Note from the picture how the robot can reach its
target paper reel from the best picking direction in a safe way. Figure 4.34
shows a small problem that appears due to the lack of a simulated controller
behaviour in the robot: because the heading of the robot has to change by 180°,
the robot rotates on the spot. We did not solve this problem, but we believe
that if the controller had been simulated such problems could be avoided.

Figure 4.32: The chosen picking direction provides safe movement for the truck, so that it does not come into conflict with any other obstacle.

Figure 4.33: The selected path also provides safe reaching of the paper reel.

Figure 4.34: However, we could not simulate the return movement, since the rotation of the truck direction happens instantaneously, not exactly as in real life.
Chapter 5
Discussion
In the results chapter we described our results and addressed the main problems
that we faced. Our work focuses on two main problems: building a map from laser
range finder scans, and analysing that map, or any provided map with any
arrangement of paper reels, to find a suitable picking order and the best
direction from which to pick up each paper reel. Although we have tested and
evaluated our algorithms with only one arrangement of paper reels, we believe
they are able to work on several arrangements.
The map-building algorithm is of order O(N²), where N is the number of features
in the map. We are lucky that the number of features is small in our
environment; however, this is not the case in reality. The algorithm itself is
accumulative, which means that the number of features in the map increases with
every new scan. So, after several scans, the order of the algorithm becomes
O(N·M), where N is the number of features in the map, which becomes the
knowledge base, and M is the number of features from the new scan; compare
figure 4.18 with figure 4.14.
While we believe that our algorithm can give an accurate estimate of the
locations of paper reels and other features, due to its accumulative behaviour,
we find that for the same reason it also adds a lot of unwanted features to the
map (figure 4.18). During feature extraction some features are extracted
completely wrongly, meaning that they do not exist. This happens roughly once
in every one or two extraction steps. However, they remain in the final map
(see figure 4.7). They stay in the final map because the summation algorithm
adds each pair of nearest features together; we assumed that what remained
would only be paper reels, so the algorithm transferred them as they are.
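A minimal sketch of this kind of nearest-feature accumulation is given below, with every feature reduced to its centre point; the 0.3 m association gate and the plain averaging are illustrative assumptions, not our actual parameters.

    % Sketch of the accumulation (summation) step: a feature from a new scan
    % is merged with its nearest neighbour already in the map when it is
    % closer than a gate, otherwise it is appended as a new feature.
    function mapFeat = accumulateFeatures(mapFeat, scanFeat, gate)
        % mapFeat, scanFeat: matrices of feature centres, one [x y] row each
        if nargin < 3, gate = 0.3; end
        for j = 1:size(scanFeat, 1)
            if isempty(mapFeat)
                mapFeat = scanFeat(j, :);
                continue;
            end
            d = hypot(mapFeat(:,1) - scanFeat(j,1), mapFeat(:,2) - scanFeat(j,2));
            [dmin, k] = min(d);                          % nearest feature already in the map
            if dmin < gate
                mapFeat(k, :) = 0.5 * (mapFeat(k, :) + scanFeat(j, :));  % merge by averaging
            else
                mapFeat(end+1, :) = scanFeat(j, :);      % new feature enters the map for good
            end
        end
    end

Note that nothing in this loop ever deletes a feature, which is exactly why a wrongly extracted feature stays in the final map once it has been added.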
One solution could be that, at each update step, the matcher saves the three
previous sequential maps and compares the current map with these three previous
maps, with the knowledge-base map and with the predefined warehouse map that is
provided to the robot. If a feature is not one of the warehouse structures and
does not appear in three sequential scans, then the matcher algorithm should
remove it.
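One possible form of this filter is sketched below: a candidate feature is promoted to the knowledge base only after it has been re-observed in three consecutive scans and it is not already explained by the predefined warehouse map. The gate distance and the exact bookkeeping are assumptions made for the illustration.

    % Sketch of the suggested persistence filter.  kb holds confirmed feature
    % centres [x y], cand holds candidate rows [x y count], scanFeat holds the
    % features of the new scan, warehouseFeat the predefined structures.
    function [kb, cand] = persistenceFilter(kb, cand, scanFeat, warehouseFeat, gate)
        if nargin < 5, gate = 0.3; end
        seen = false(size(cand, 1), 1);
        for j = 1:size(scanFeat, 1)
            f = scanFeat(j, :);
            if any(hypot(warehouseFeat(:,1) - f(1), warehouseFeat(:,2) - f(2)) < gate)
                continue;                              % explained by the warehouse map
            end
            d = hypot(cand(:,1) - f(1), cand(:,2) - f(2));
            [dmin, k] = min(d);
            if ~isempty(dmin) && dmin < gate
                cand(k, 3) = cand(k, 3) + 1;           % re-observed in this scan
                seen(k) = true;
            else
                cand(end+1, :) = [f, 1];               % new candidate starts counting
                seen(end+1, 1) = true;
            end
        end
        cand(~seen, 3) = 0;                            % missed a scan: counter restarts
        kb = [kb; cand(cand(:,3) >= 3, 1:2)];          % promote after 3 consecutive scans
        cand(cand(:,3) >= 3 | cand(:,3) == 0, :) = []; % drop promoted or reset candidates
    end

A candidate that misses a single scan is dropped again, which is what would remove the spurious features described above.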
In figure 4.18, which is an example of the built map, most of the warehouse
walls are ignored and replaced with circles instead. A circle appears in the
place of a wall roughly once every ten extraction steps, as a bad fit to some
scan points; in some cases the segment is not long enough to reflect the real
normal distribution of the error. For example, if the segment is only five or
seven points long and these points happen to form a circle, then a circle
appears while in reality the points are part of a line. At first we believed
that such a problem would not appear, so we made the algorithm omit the line
wherever a line and a circle intersect. After that we tried to overcome the
problem by comparing the sum of the diagonal of the uncertainty matrices. The
surprising part was that, even with this solution, the diagonal of the circle's
uncertainty was smaller than the diagonal of the line's uncertainty. We now
believe the solution is to fix the lines that fit the walls in the knowledge
base during the matching step, and whenever a new bad estimate occurs and a
circle appears there, the circle should be removed.
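A sketch of that rule is given below: wall lines are kept fixed in the knowledge base, and any newly fitted circle whose centre lies on, or very close to, such a line is treated as a bad fit and discarded. The 0.5 m margin is an illustrative assumption.

    % Sketch: discard circle features that sit on a fixed wall line.
    % circles: K-by-3 rows [xc yc r]; walls: M-by-4 rows [x1 y1 x2 y2].
    function circles = rejectCirclesOnWalls(circles, walls, margin)
        if nargin < 3, margin = 0.5; end
        keep = true(size(circles, 1), 1);
        for i = 1:size(circles, 1)
            c = circles(i, 1:2);
            for j = 1:size(walls, 1)
                a = walls(j, 1:2);  b = walls(j, 3:4);
                t = max(0, min(1, dot(c - a, b - a) / sum((b - a).^2)));  % closest point on the segment
                if norm(c - (a + t * (b - a))) < circles(i, 3) + margin
                    keep(i) = false;               % circle overlaps a wall line: reject it
                    break;
                end
            end
        end
        circles = circles(keep, :);
    end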
The algorithm that analyses the map to find the best order of paper reel
picking is of order O(N²·M), where N is the number of paper reels in the map
and M is the number of features in the map. Again, in computer science such an
algorithm may not be preferred. However, we are always dealing with a limited
number of paper reels and not a very large number of features, so its speed is
acceptable.
We assumed that the truck is able to build the map, or that the map provided as
input to this algorithm gives a full view of the paper reel situation, i.e. the
map reflects the surrounding environment of each paper reel. To build such a
map the truck would have to drive around the whole stack. This is not the case
in reality, where the truck is only able to see the reels from one direction.
We therefore suggest using only 180° of the availability instead of the whole
360° that we have used here, namely the face of the paper reel stack that faces
the truck during its motion.

Finally, we can say that there are different steps that could be taken to
extend this research. However, under our assumptions, the algorithm for
evaluating paper reels to find the best picking order is robust.
Chapter 6
Conclusions
In this research we built a simulation environment to test several methods and
evaluate them on arrangements of paper reels. We used two different laser
scanners to generate a local and then a global map. A method for combining
several local maps into one big global map is proposed and has been tested.
Another method for finding the best paper reel to pick is proposed, as well as
a method for selecting the picking direction. The results were sufficient in
this respect. However, the algorithms have not been tested in a real-time
environment; we hope to test them in the future.
We aimed in this research to discriminate between reels and concrete columns
when they have the same shape and radius and the concrete column is inside the
paper reel stack. We have not obtained good results yet, but we hope to do so
soon. We approached this by combining two maps: the map built by our algorithm
and a predefined one. Our algorithm for generating a global map by accumulating
several local maps has a lot of errors and bad estimates. However, the columns
that are detected and built are sufficiently accurate.
We generate paths based on a b-spline function. At some points we want to
update the path to follow new updates of the map. We do that off-line, not
on-line: the robot has to stop and then generate a new path before it
continues.
We came up with an algorithm that generates, from a predefined map, a table
containing the order of the reels and the direction for picking each reel,
where the map of the paper reels is built by our algorithm.
We tested several segmentation algorithms. Most segmentation algorithms try to
solve the problem of detecting linear segments. This causes different drawbacks
when they are used in an environment containing several kinds of feature
shapes.
Finally, the Matlab script that we built is easy to use. All constants and
variables are arranged in a way that makes them easy to reach and modify. The
main thing in the script is to provide it with a text file describing the
environment and the required arrangements.
Appendix
Kinematic example
Figure 6.1: Ackerman Steering Principle.
The bicycle kinematic model that appears in Figure 3.1 is a simplification of
Figure 6.1, under the assumption that there is no slip in the wheel motion.
Let α_r be the steering angle of the right rear wheel and α_l the steering
angle of the left rear wheel. The simplified model replaces these two wheels,
as in Figure 3.1, with a single wheel at the centre point of the line between
the two wheels. The steering angle of this wheel is α. For simplicity we assume
that this new steering angle is the mean of the two original angles:

    α ≈ (α_r + α_l) / 2                                        (6.1)
Let X and Y be the x and y positions of the robot in the global coordinate
frame and let θ be the heading of the robot in the same frame. Then the pose of
the robot in the global frame is

    P = [X, Y, θ]^T                                            (6.2)

When the robot rotates around its centre of rotation (ICR) at a constant speed
v, the change in position during one sampling period dT is

    ΔP = v dT cos α                                            (6.3, 6.4)
For the rear wheel, the distance moved in the same time dT is

    ΔS = v_t dT                                                (6.5)

where v_t is the tangential velocity at the centre of the rear wheel, as in the
top-left corner of Figure 3.1. We also know that

    Δθ = ω dT                                                  (6.6)

where ω is the angular velocity of the robot around its instantaneous centre of
rotation. From the relation between angular and linear velocity we can write

    ω = v_t / d                                                (6.7)

where d is the radius of the circular motion of the rear wheel about the ICR.
From Figure 3.1 we can see that

    d = L / sin α                                              (6.8)

where L is the distance between the rear and front wheels. Substituting 6.8 and
6.7 into equation 6.6 gives

    Δθ = v_t (sin α / L) dT                                    (6.9)
From our knowledge we know that the tangential velocity v_t is

    v_t = v cos α                                              (6.10)

so the resulting change in the heading is

    Δθ = (v / L) cos α sin α dT = (v / 2L) dT sin(2α)          (6.11)

From equation 6.4 we can derive the change in robot position as

    ΔX = v cos α dT cos(θ + Δθ)
    ΔY = v cos α dT sin(θ + Δθ)                                (6.12)

If the sampling period dT is small enough, say 0.1 s for example, the term
cos α ≈ 1, which means we can simply neglect it in our calculations. The simple
form of equation 6.12 then becomes

    ΔX = v dT cos(θ + Δθ)
    ΔY = v dT sin(θ + Δθ)                                      (6.13)

    ΔX = v dT cos(θ + (v / 2L) dT sin(2α))
    ΔY = v dT sin(θ + (v / 2L) dT sin(2α))                     (6.14)

However, neglecting the cosine term from the calculations would introduce a
systematic error, so it is not recommended to neglect it. The simpler form of
equation 6.12, which can be easier to use in calculations, is as given in
equations 3.2.
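As a small illustration, the following MATLAB-style sketch integrates the pose with equations 6.11 and 6.12 over small steps, keeping the cos α term as recommended above; the wheel base, speed and steering angle are example values and not parameters of the real truck.

    % Dead-reckoning sketch using the bicycle model above.
    L     = 1.7;              % distance between rear and front wheels [m] (assumed)
    dT    = 0.1;              % sampling period [s]
    v     = 1.0;              % speed [m/s]
    alpha = deg2rad(15);      % combined steering angle [rad]
    pose  = [0; 0; 0];        % [X; Y; theta] in the global frame
    for k = 1:200             % simulate 20 s of motion
        dtheta  = (v / (2 * L)) * dT * sin(2 * alpha);                    % eq. 6.11
        pose(1) = pose(1) + v * cos(alpha) * dT * cos(pose(3) + dtheta);  % eq. 6.12
        pose(2) = pose(2) + v * cos(alpha) * dT * sin(pose(3) + dtheta);
        pose(3) = pose(3) + dtheta;
    end
    fprintf('final pose: X=%.2f m, Y=%.2f m, theta=%.1f deg\n', pose(1), pose(2), rad2deg(pose(3)));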
Availability calculation example
Figure 6.2: The paper reel in the middle is located between a wall and a column.
This example illustrates the method used for calculating the availability
angles. Let the paper reel in Figure 6.2 be located at a distance p_wd = 208 cm
from a wall and at a distance p_cd = 348 cm from another paper reel or concrete
pillar. Let the truck length be 370 cm and the grabber width 5 cm. Let the
angle of the centre line between the paper reel and the circular obstacle be
φ = 210° and the angle of the wall normal be β = 90°. Let the radius of the
paper reel be r_1 = 90 cm and the radius of the column r_2 = 95 cm. Finally,
let the picking interval be a = 5°.
It is obvious that both the wall and the column are considered as obstacles,
because the distance from each of them to the paper reel is less than one and a
half times the total truck length. So it is necessary to calculate the
availability with respect to both of them. Start by calculating the picking
directions from each obstacle alone. Using equations 3.40 and 3.41 we calculate
the availability with respect to the wall. From equation 3.40 the minimum
picking angle is

    θ_1,min = floor( mod(270° - 90° + sin⁻¹((208 - 90 - 10)/(370 + 10)), 360°) / 5° ) · 5° = 195°     (6.15)

and the maximum picking direction from equation 3.41 is

    θ_1,max = ceil( mod(270° + 90° - sin⁻¹((208 - 90 - 10)/(370 + 10)), 360°) / 5° ) · 5° = 345°      (6.16)

So the availability excludes all directions {195 : 5 : 345}, which means that
with respect to the wall it is

    {0 : 5 : 195} and {345 : 5 : 360}                                                                  (6.17)
Now calculate the availability with respect to the column, using equations
3.42, 3.43 and 3.44. The angle α is calculated as in equation 3.44:

    α = sin⁻¹( (95 + 90 + 10) / 348 ) = 34°                                                            (6.18)

The minimum picking direction follows from equation 3.42,

    θ_2,min = floor( mod(180° + 210° - 34°, 360°) / 5° ) · 5° = 355°                                   (6.19)

and the maximum picking angle from the column is calculated by equation 3.43 as
follows:

    θ_2,max = ceil( mod(180° + 210° + 34°, 360°) / 5° ) · 5° = 65°                                     (6.20)
The availability with respect to the column consists of the directions from
θ_2,min to θ_2,max taken clockwise, i.e. all directions

    {355 : 5 : 65}                                                                                     (6.21)

Then, after converting the two availabilities to a linear representation and
adding them together as in figure 6.3, we obtain the final true availability,
shown in the third line of the figure:

    {65 : 5 : 195} and {355 : 5 : 360}                                                                 (6.22)
Figure 6.3: Adding the two availabilities together, after converting the angles to a linear representation, to obtain the total available directions.
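The same numbers can be checked with the short MATLAB-style sketch below. The blocked arcs are tested here by angular distance from their centres, which is equivalent to equations 3.40 to 3.44 up to the rounding to the 5° grid; the handling of the interval end-points near 0°/360° is an assumption, so the result may differ from equation 6.22 by a grid step at the borders.

    % Availability calculation for the worked example above.
    p_wd = 208;  p_cd = 348;       % reel-wall and reel-column distances [cm]
    Lt   = 370;  g    = 10;        % truck length and grabber clearance term [cm]
    r1   = 90;   r2   = 95;        % reel and column radii [cm]
    beta = 90;   phi  = 210;       % wall-normal angle and centre-line angle [deg]
    a    = 5;                      % resolution of the picking directions [deg]
    dirs = 0:a:355;                % all candidate picking directions

    delta = @(d, c) abs(mod(d - c + 180, 360) - 180);       % angular distance [deg]

    % Wall (equations 6.15-6.16): an arc centred on 270 deg is blocked
    gamma   = asind((p_wd - r1 - g) / (Lt + g));             % about 16.5 deg
    th1_min = floor(mod(270 - beta + gamma, 360) / a) * a;   % 195
    th1_max = ceil(mod(270 + beta - gamma, 360) / a) * a;    % 345
    wallOK  = delta(dirs, 270) >= beta - gamma;

    % Column (equations 6.18-6.20): an arc centred on phi + 180 deg is blocked
    alphaC  = asind((r2 + r1 + g) / p_cd);                   % 34 deg
    th2_min = floor(mod(180 + phi - alphaC, 360) / a) * a;   % 355
    th2_max = ceil(mod(180 + phi + alphaC, 360) / a) * a;    % 65
    colOK   = delta(dirs, phi + 180) >= alphaC;

    avail = dirs(wallOK & colOK)   % final availability, cf. equation 6.22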