Petter Johansson
http://www.design.lth.se
Abstract
The purpose of the project was to design a robot cell calibration method built
on simple laser sensing methods. The main reason was to make it easier for
small and medium-sized enterprises to use robots when producing in short
series.
Different working principles for the laser and the sensor were explored, and a
laser-sensor device similar to a bar-code reader, where the intensity of reflected
light is measured, was chosen. The laser beam tracks a fixed black square
printed on a normal paper from different directions to obtain the unknown
coordinate system by calculating the intersection of the different beam paths.
A simulation model was built in the Matlab environment Simulink, where
a controller program was developed and tested. The controller program was
then ported to a robotic programming language and successfully run on a real
industrial robot.
Matlab routines for analysing the measured data were also implemented.
Contents
1 Introduction 1
1.1 Purpose of project . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 The task . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
2 Background 2
2.1 Accuracy and repeatability . . . . . . . . . . . . . . . . . . . . 2
2.2 Offline programming . . . . . . . . . . . . . . . . . . . . . . . . 2
2.3 Objective of calibration . . . . . . . . . . . . . . . . . . . . . . 2
2.4 Previous methods . . . . . . . . . . . . . . . . . . . . . . . . . . 4
3 Laser and Transducers 6
3.1 Physics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
3.2 Laser . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
3.3 Photodiode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
4 Concepts 10
4.1 Intersection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
4.2 Proposals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
4.3 Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
5 Kinematics 16
5.1 Frame description . . . . . . . . . . . . . . . . . . . . . . . . . . 16
6 Simulation 18
6.1 Simulink . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
6.2 Sensor model . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
6.3 Robot model . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
6.4 Controller model . . . . . . . . . . . . . . . . . . . . . . . . . . 20
6.5 Saver model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
6.6 Simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
7 Implementation 28
7.1 Laser and sensor . . . . . . . . . . . . . . . . . . . . . . . . . . 28
7.2 Paper pattern . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
7.3 Robot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
7.4 Data processing . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
8 Results 36
8.1 Search pattern . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
8.2 Data analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
9 Conclusions 38
9.1 Simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
9.2 Experiment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
9.3 Data analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
9.4 Future development . . . . . . . . . . . . . . . . . . . . . . . . 39
A Simulation models 45
A.1 Sensor model . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
A.2 Robot model . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
A.3 Controller model . . . . . . . . . . . . . . . . . . . . . . . . . . 49
B Implementation 53
B.1 Adapter drawing . . . . . . . . . . . . . . . . . . . . . . . . . . 53
B.2 Rapid code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
B.3 Data processing . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
C Results 71
C.1 3D plots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
Chapter 1
Introduction
[SMErobot, 2006]
Chapter 2
Background
2.3 Objective of calibration
responsibility of the robot manufacturer and is thus not treated in this project.
Calibration of the robot TCP (tool center point) is very important since it
has a direct impact on the accuracy. The calibration has to be redone for every
new tool or when the tool for some reason is damaged. ABB has an automatic
solution called Bullseye [ABB, 2006] that measures the TCP of a welding gun
with a light source/transducer. TCP calibration is however not treated in this
project due to delimitations.
Calibration of static relations between objects in the robot cell, such as robot
- cell origin, robot - workpiece/fixture, robot - robot or robot - machine, is
important if the workcell is flexible and cell setups often change due to changes
in production. This is the main focus of this project.
Position error
In a simulation layout a fixture might be placed at X = 5, Y = 5. But it turns
out that in the real layout the actual fixture is placed at X = 6, Y = 6. The
first solution to this problem that comes to mind is to measure the correct
position, update the offline model and download a new program to the robot
controller. This adjustment might work in a one-robot cell.
Assume instead that the cell contains two robots and one fixture. Viewing
the first robot as the world origin, the second robot is placed in the wrong
position compared to the offline model. If the position of the second robot
is updated according to the measurements, the relation between the second
robot and the fixture also has to be updated (see figure 2.1) [Seyfarth, 2006].
Figure 2.1: Model versus actual placement of Robot 2 and the fixture relative to Robot 1 at the world origin (0, 0, 0), with the frame relations R1F1 and R2F1. In the model Robot 2 is at (0, 2, 0); in the actual cell it is at (0.5, 2, 0).
Manual teach in
The most common way of calibrating a workcell has been to use the robot as
a measuring device and an operator as director with feedback. This is similar
to the teach-in methods used for offline programming and thus very time-
consuming [Seyfarth, 2006].
[Figure: Laser triangulation with a CCD/PSD sensor: a lens images the laser spot on the object onto the sensor, so a displacement ∆Z of the object produces a measurable shift ∆z on the sensor.]
With the device mounted as the robot tool and measuring different points, it
is used to identify the position of rectangular plates that specify the different
frames of interest. The method is interesting in that the plates are cheap and
easily attached to any object, but the cost of a high precision triangulation
device might be prohibitive. A laser triangulating device from the Keyence
LK-series starts at 40000 SEK [Provicon, 2006].
Chapter 3
Laser and Transducers
3.1 Physics
Stimulated emission
Atoms have the ability to emit and to absorb radiation energy such as visible
light. When light is absorbed, the atom is excited into a higher energy state.
If the atom is left at this excited high energy state, it will sooner or later re-
emit the light and thus return to a lower state. Even though this spontaneous
emission cannot be predicted for a single atom, it is possible to calculate the
mean lifetime of the state.
Stimulated emission occurs when the excited atom is exposed to radiation of
a specific frequency. If the radiation has the same frequency as the one the
atom is about to emit, the radiation will be emitted earlier. The emitted
radiation will have the same direction, phase and polarisation as the stimulating
radiation. This way the radiation beam can be amplified. But in equilibrium,
almost all atoms are at the lower energy level, and thus the photons will be
absorbed and the radiation damped.
In order for the amplification to occur there has to be an inverted population,
i.e. there have to be more atoms in the excited state than in the low energy
state [Jönsson, 2002].
Semiconductors
Semiconductors are materials that can be both conducting and insulating
depending on the conditions. Only electrons with an energy level above the
Fermi energy Ef may be acted on and accelerated in the presence of an electric
field.
In a solid material the electrons of the atoms are acted on by the electrons
and nuclei of adjacent atoms in such a way that the available energy states are
split into electron energy bands (see figure 3.1). Between the energy bands there
may exist energy band gaps. Energy levels within these gaps are not available
for the electrons that participate in the covalent bonds. In a semiconducting
material such as silicon there exists such a gap between the filled valence band
and the empty conduction band. The Fermi energy of pure silicon lies near
the centre of this band gap. In order for an electron to be excited into the
conduction band, it must be given an energy greater than the band gap energy.
Figure 3.1: Electron energy bands in a solid: filled bands, the valence band, the band gap, the conduction band and the unfilled bands above, ordered by increasing energy.
n-layer
In the n-layer, impurities with one extra valence electron are added. If for
example phosphorus, with five valence electrons, is added into a matrix of
silicon with four valence electrons, the extra electron will not participate in the
covalent bond. Instead there exists one energy state for that electron within
the energy band gap, close to the conduction band, called the donor state.
Here the Fermi energy is also closer to the conduction band. The thermal
energy available at room temperature is sufficient to excite the electron from
the donor state into the conduction band, thus creating an excess of free
electrons in the n-layer without leaving a hole in the valence band.
p-layer
The p-layer works the opposite way. Here impurities with one valence electron
less are added. For example, boron with three valence electrons could be
added into the silicon matrix. This kind of impurity atom introduces an energy
level just above the valence band within the band gap, called an acceptor state.
The thermal energy present at room temperature will excite the electron from
the valence band into the acceptor state, thus creating an excess of free holes
in the valence band without leaving any new electrons in the conduction band.
By joining n-doped and p-doped layers in different ways, components such
as diodes, transistors and microchips can be made. [Callister, 2005]
3.2 Laser
The Laser, Light Amplification by Stimulated Emission of Radiation, emits
photons in a coherent beam. It consists of an active medium placed in an
optical cavity, i.e. between two mirrors. An inverted energy population is
created by pumping energy into the medium. The spontaneously emitted light
is then reflected by the mirrors and amplified by the medium, thus creating
a standing wave. By making one of the mirrors partially transparent, a laser
beam is formed.
The first
The ruby laser (see figure 3.2), the first working laser type, was invented by
Theodore Maiman in 1960. The active medium consisted of the gemstone ruby,
which is chromium embedded in a matrix of aluminium oxide. Both ends of
the ruby were polished and silvered, and energy was pumped into it by a
spiral-formed flashing lamp.
Since then other lasers, such as YAG, gas and semiconductor lasers, have
been developed.
Semiconductor laser
When current is driven through a p-n junction, electrons and holes recombine
and the energy of the electrons is emitted as light. While the current is kept
low this component is known as a LED, a light emitting diode. Increasing the
current over a defined limit creates an inverted population, since the electrons
are injected faster than the holes can receive them. By splitting and polishing
the semiconducting crystal, it becomes an optical cavity where the light is
amplified, thus creating a standing wave. The light emitted is coherent, but
due to diffraction it is quite divergent and therefore needs lenses to form a
round straight beam.
The semiconductor laser has been developed to supply the needs of optic
fibre communication and optical information storage such as the CD and the
DVD. This development has led to small and inexpensive lasers that are now
commonly available. [Jönsson, 2002]
3.3 Photodiode
A photodiode can be used to reconvert a light signal into an electric signal.
Similar to a laser diode, a photodiode consists of a p-n junction semiconductor.
When light hits a photodiode and the energy absorbed is greater than the band
gap energy Eg, electrons from the valence band in the n-layer, the p-layer
and the depletion layer in between are excited into the conduction band,
leaving holes in the valence band. Due to the electric field in the depletion
layer, electrons are accelerated towards the n-layer and holes towards the p-
layer, thus building up a positive charge in the p-layer and a negative charge in
the n-layer. With an external circuit connected, a current will flow from the
p-layer to the n-layer.
The spectral response and the frequency response of the photodiode are
controlled by the thickness of the layers and the doping concentrations.
[Hamamatsu, 2006]
Chapter 4
Concepts
4.1 Intersection
Two lines
If the sensor system is able to locate the direction from a known point to the
point that is to be identified, both can be viewed as being on a known line.
Two lines can either be parallel, skew or intersecting in one point [Sparr, 1994].
If directional measurements can be made from two known points, the location
of the unknown can be calculated (see figure 4.1).
Three planes
A sensor system identifying a plane where the unknown point is located can,
in a similar way as with the line detector, be used to identify the location of a
point. Two planes can either be parallel or intersecting in one line. If a third
plane is not parallel to either of the other two non-parallel planes, all three
planes will intersect in one point [Sparr, 1994]. If this kind of plane measurement
can be made from three known points, the location of the unknown can be
calculated (see figure 4.2).
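Both intersection principles reduce to small linear systems. A numerical sketch (Python with NumPy; the lines and planes are made-up examples, not measurement data):

```python
import numpy as np

# Two lines p_i + t_i * d_i; with ideal measurements they intersect exactly.
p1, d1 = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])
p2, d2 = np.array([1.0, 1.0, 0.0]), np.array([0.0, -1.0, 0.0])
# Solve p1 + t1*d1 = p2 + t2*d2 in a least squares sense (real lines may be skew).
A = np.column_stack([d1, -d2])
t1, t2 = np.linalg.lstsq(A, p2 - p1, rcond=None)[0]
print(p1 + t1 * d1)                       # -> [1. 0. 0.]

# Three planes n_i . x = c_i with non-parallel normals meet in one point.
N = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
c = np.array([2.0, 3.0, 4.0])
print(np.linalg.solve(N, c))              # -> [2. 3. 4.]
```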
Transducer mounting
Mounting a laser beam in an exactly intended direction with reference to the
wrist of a robot might be a challenging task. Problems occur because even
small directional deviations of the beam at one end lead to large distance
deviations at the other end. A deviation of 1° will make the beam deviate
about 5 mm at a point 300 mm from the origin (see equation 4.1).
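The 5 mm figure follows from simple trigonometry, d = L · tan(θ). A quick check (Python; values from the text):

```python
import math

# Angular mounting error of 1 degree propagated to a distance of 300 mm
deviation_mm = 300 * math.tan(math.radians(1.0))
print(round(deviation_mm, 2))  # -> 5.24
```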
4.2 Proposals
Single photodiode
A single photodiode defines a point of interest. The wrist-mounted laser points
at the photodiode and an acknowledge signal is generated when the beam hits
it. With beams coming from different directions, one single unambiguous point
might be difficult to define, since the light might refract differently depending
on the incident angle.
By lowering the photodiode into a cone (see figure 4.3), the small hole at the
tip of the cone could define the point, whereas the incident angle becomes less
important. Any light that enters the cone enters through this point and is
detected by the photodiode that constitutes the floor of the cone. Three cones
are milled in one flat piece of metal for each coordinate system, in order to
retain correct relations between the axes. Since the photodiode will determine
the total amount of light entering, interference effects might not be a
problem.
Even though this setting can tell that the beam points in the correct
direction, it might be difficult to find the hole because the number of points in
space is large. Searching through all points might take a lot of time, even if the
point is almost known and the space thus is limited.
If a lens that spreads the beam into a plane [ELFA, 2006] is attached to the
laser, it will be easier to align the point with the plane, but as discussed in
section 4.1 more measurements must then be made.
Photodiode array
The properties of semiconducting materials make it possible to integrate
several photodiodes into arrays or matrices. The arrays are normally used in
spectrometers, linear encoders or for laser alignment in CD players. Matrices
tailored for laser beam alignment are also available [Hamamatsu, 2006]. A
similar matrix is the CCD element that captures the image in a digital camera,
which contains more photodiode elements than the previous ones.
A matrix can be used as feedback, directing the beam in two dimensions into
the centre of the matrix. With a beam plane in combination with an array,
control in one dimension would be enough, letting the plane sweep over the
array more or less perpendicularly (see figure 4.4).
Bar-code reader
Pen-type bar-code readers consist of a light source and a photodiode placed
next to each other. The photodiode is selected such that it has its greatest
sensitivity at the frequency the light source emits. The photodiode
measures the intensity of the reflected light as the pen is swept over the bar-
code (see figure 4.5). Black areas absorb most of the light, whereas white areas
reflect much of it. The bar-code information is thus transformed into
a binary signal. Different line widths make it possible to encode a fair
amount of information. A drawback of the pen reader is that it has to be swept
at constant speed in close proximity to the bar-code.
Laser scanners similarly use a laser beam as the light source. Here
the user does not have to sweep the scanner, since the beam is directed back
and forth with a reciprocating mirror or a rotating prism [TALtech, 2006].
The well-directed light beam from the laser also makes it possible to read
bar-codes from a distance. A few beam paths in different directions give the
user freedom to scan the bar-code of an object without having to care about
the direction of the scanner. [Netto, 2006]
A hybrid between the pen reader and the laser scanner, with a fixed laser
beam mounted on a robot, could be used to identify a pattern printed on a
paper, leading to the identification of the unknown coordinate system. Ideally
the robot will follow a line or a curve on the paper to a point (e.g. a corner)
while the coordinates and directions of the robot wrist are saved for
later calculations. This is repeated a sufficient number of times from
different directions to gain full information about the coordinate system. This
method will probably be able to trace the coordinate systems from an arbitrary
distance and thus speed up the measurements.
4.3 Selection
Criteria
To select one of the proposed methods, some evaluation criteria were set up.
Evaluation
The proposed laser and sensor combinations were graded on a scale from 1 to 5,
where 5 represented the most desirable properties for that criterion. The grades
from the different criteria were summed without weighting and compared
to the average of all the sums.
Coordinate system              Robot wrist                    1  2  3  4  Sum
single photodiode in a cone    laser beam                     3  5  4  2  14
single photodiode in a cone    laser plane                    3  3  4  3  13
photodiode array               laser plane                    3  3  3  3  12
photodiode matrix              laser beam                     2  5  2  3  12
laser beam                     single photodiode in a cone    2  3  2  4  11
laser beam                     photodiode matrix              2  2  2  4  10
paper with printed patterns    fixed laser bar-code reader    5  4  5  4  18
                                                          Average:      12.9
Conclusion
By analysing the evaluation table, one concludes that the bar-code reader
solution is the best one to proceed with.
Chapter 5
Kinematics
5.1 Frame description
The position vector can also be used to describe translations from one position
to the other.
Rotation matrix
A robot with six degrees of freedom can be oriented in any direction. Therefore
it is necessary in most applications to describe not only positions but also
orientations. A new frame of reference B is attached to the body whose
orientation is to be described. This new frame B is described by the unit
vectors ^A X_B, ^A Y_B and ^A Z_B expressed in frame A. Joined as columns,
these three vectors form the rotation matrix:

  ^B_A R = [^A X_B  ^A Y_B  ^A Z_B] = [r11 r12 r13; r21 r22 r23; r31 r32 r33]   (5.2)
Transformation matrix
If a position is known in frame B as the vector ^B p, it is possible to describe it
in frame A by multiplying the vector with the rotation matrix ^B_A R and then
adding the translation ^A p_Borigo:

  ^A p = ^B_A R ^B p + ^A p_Borigo   (5.4)

This can be written more conveniently by rewriting the expression as

  [^A p; 1] = [^B_A R, ^A p_Borigo; 0 0 0 1] [^B p; 1]   (5.5)

  ^A p = ^B_A T ^B p   (5.6)
Combined transformations
If the transformation matrix ^C_B T is known, a position ^C p in frame C can be
written in reference to frame B:

  ^B p = ^C_B T ^C p   (5.7)

By combining equation (5.6) and equation (5.7), ^C p can be described directly
in reference to frame A:

  ^A p = ^B_A T ^C_B T ^C p   (5.8)

This means that the compound matrix ^C_A T can be written as the kinematic
chain (5.9):

  ^C_A T = ^B_A T ^C_B T   (5.9)
Inverted transformation
By using the rules defined in equation (5.3) it is easy to invert the
transformation matrix:

  ^A_B T = (^B_A T)^-1 = [^B_A R^T, -^B_A R^T ^A p_Borigo; 0 0 0 1]   (5.10)
[Olsson et al., 2005]
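The transformation algebra of this chapter is easy to exercise numerically. A small sketch (Python with NumPy; the example rotation and translations are made up) of the chain rule (5.9) and the closed-form inverse (5.10):

```python
import numpy as np

def make_T(R, p):
    """Homogeneous transformation from a rotation matrix R and translation p."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = p
    return T

def invert_T(T):
    """Closed-form inverse as in equation (5.10): [R^T, -R^T p; 0 0 0 1]."""
    R, p = T[:3, :3], T[:3, 3]
    return make_T(R.T, -R.T @ p)

# A 90 degree rotation about z plus a translation (illustrative values)
Rz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])
T_AB = make_T(Rz, [1.0, 2.0, 3.0])
T_BC = make_T(np.eye(3), [0.5, 0.0, 0.0])

T_AC = T_AB @ T_BC                      # kinematic chain, equation (5.9)
print(T_AC[:3, 3])                      # -> [1.  2.5 3. ]
```

The closed-form inverse avoids a general 4x4 matrix inversion and satisfies invert_T(T) @ T = I by construction.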
Chapter 6
Simulation
6.1 Simulink
Simulink is a toolbox within Matlab that is capable of modelling, simulating
and analysing dynamic systems. The systems can be both linear and nonlinear.
It is possible to work both by connecting ready-made building blocks and by
writing custom function blocks in different programming languages such as
Matlab M-code, C or C++ [MathWorks Inc., 2006].
6.2 Sensor model
Figure 6.1: Model of how the laser beam interacts with the plane. The inputs are Plane (1), Robot base (2) and Mounting plate (3); one function block computes the intersection of the beam with the plane, and a second decides whether the hit point is white; the outputs are the Sensor signal and the x and y positions. The black field starts at (100, 100) with its size given by the box constant.
In the model world all transformation matrices are known. The plane input
(Input 1, figure 6.1) is the transformation matrix from the world origin to the
plane where the unknown coordinate system is located. The x and y directions
of the transformation matrix together with the translational part span the
plane (see equation 6.1).
  T4 = [r11_4 r12_4 ... px_4; r21_4 r22_4 ... py_4; r31_4 r32_4 ... pz_4; 0 0 0 1]   (6.1)
The other inputs (in figure 6.1) describe the kinematic chain from the world
origin to the laser sensor. As the laser beam coincides with the x direction of
the sensor coordinate system, the [r11; r21; r31] vector of the rotation matrix
of the chain together with the translational part defines a line (see equation 6.2).
  T1 · T2 · T3 = [r11_3 ... ... px_3; r21_3 ... ... py_3; r31_3 ... ... pz_3; 0 0 0 1]   (6.2)
When the laser beam points at the plane, the line and the plane intersect in one
point (see equation 6.3). The first block in figure 6.1 contains a Matlab function
(see section A.1) that solves equation 6.3 for u, w and t. The unknowns u and
w are the x and y positions where the line intersects the plane, with respect to
the plane origin.
  [px_4; py_4; pz_4] + u · [r11_4; r21_4; r31_4] + w · [r12_4; r22_4; r32_4] = [px_3; py_3; pz_3] + t · [r11_3; r21_3; r31_3]   (6.3)
The next block uses this x, y position to determine whether the beam hits the
plane outside or inside the black square defined by (0, 0) and the box constant.
The output of the function is a boolean that is false if the beam is absorbed by
a black area and true if the beam is reflected by a white area.
The x and y positions are in reality unknown to the robot, but they are made
outputs from the model for analysis purposes only.
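Equation 6.3 is a 3×3 linear system in u, w and t. A sketch of the solve step (Python with NumPy; the plane and beam values are made-up stand-ins for the model's transformation matrices):

```python
import numpy as np

# Plane: origin p4, spanned by the unit vectors rx4 and ry4 (illustrative)
p4  = np.array([0.0, 0.0, 0.0])
rx4 = np.array([1.0, 0.0, 0.0])
ry4 = np.array([0.0, 1.0, 0.0])

# Beam: start point p3 and direction r3 (illustrative)
p3 = np.array([2.0, 3.0, 5.0])
r3 = np.array([0.0, 0.0, -1.0])

# p4 + u*rx4 + w*ry4 = p3 + t*r3  ->  [rx4 ry4 -r3] [u w t]^T = p3 - p4
A = np.column_stack([rx4, ry4, -r3])
u, w, t = np.linalg.solve(A, p3 - p4)
print(u, w, t)  # beam hits the plane at (u, w) = (2.0, 3.0) after t = 5.0
```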
6.3 Robot model
Figure 6.2: Model of how the robot reacts to rotate and translate commands. A function block updates the wrist transformation from wrist_old to wrist_new given the Rotate and Translate inputs; the initial condition of the delay block defines the initial wrist position and orientation.
6.4 Controller model
[Controller model diagram: a control-logic function block with input In and the parameters count_max (number of steps until a state switch) and pos_max (number of translations); a unit delay holds the internal variables state, count and pos; the outputs are Translate, Rotate and State.]
State 0
In state 0 it is assumed that the measurements start with the beam pointing at
a spot somewhere below the corner of the black square. In reality this is
guaranteed by starting the search pattern below the assumed coordinate
system.
From here the TCP rotates stepwise in the negative direction around the y axis
and in the positive direction around the z axis until the beam reaches the black
area, where the controller switches to state 1 (see figure 6.4).
State 1
In state 1, in the black area, the TCP rotates stepwise in the positive y and
negative z directions until the beam reaches the white area. In the white area
it rotates in both the positive y and z directions until it reaches the black area
again. This continues until the sensor has been pointing at the white area for
count_max consecutive steps, when the controller switches to state 2.
State 2
State 2 is similar to state 0 in that the laser beam starts far from the black
area. The beam moves by rotating around the y axis in the negative direction
until it hits the black area again, and then the controller switches to state 3.
State 3
State 3 is similar to state 1, but instead of following the horizontal line the
beam follows the vertical line. In the black area there is positive rotation
around y and negative around z. In the white area the rotation around y
is negative instead. As in state 1, the beam runs away past the corner until
the counter switches the controller to state 4.
State 4
State 4 is similar to both state 0 and state 2. Instead of rotating back to the
black area, the TCP translates stepwise in the positive y and z directions, with
the z step at half the size of the y step.
When the beam reaches the black area again, the pos counter is increased
and the state machine restarts from state 1. The second time through, the
program ends after state 3 due to the pos counter.
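The five states can be condensed into a small state machine. The sketch below (Python; the step sizes and limits are illustrative values, not taken from the thesis code) mirrors the logic described above, with `white` as the boolean sensor signal:

```python
STEP = 1.0        # hypothetical step size per control cycle
COUNT_MAX = 100   # consecutive white steps before the corner counts as passed
POS_MAX = 2       # number of line pairs to trace

def controller_step(state, white, count, pos):
    """One control cycle; returns (state, count, pos, command)."""
    cmd = {"rot_y": 0.0, "rot_z": 0.0, "trans_y": 0.0, "trans_z": 0.0}
    if state == 0:                        # approach the square from below
        cmd["rot_y"], cmd["rot_z"] = -STEP, +STEP
        if not white:
            state = 1
    elif state == 1:                      # follow the horizontal edge
        if not white:                     # black: rotate back towards white
            cmd["rot_y"], cmd["rot_z"] = +STEP, -STEP
            count = 0
        else:                             # white: advance along the edge
            cmd["rot_y"], cmd["rot_z"] = +STEP, +STEP
            count += 1
        if count > COUNT_MAX:             # ran past the corner
            state = 2
    elif state == 2:                      # rotate back towards the square
        cmd["rot_y"] = -STEP
        if not white:
            state = 3
    elif state == 3:                      # follow the vertical edge
        if not white:
            cmd["rot_y"], cmd["rot_z"] = +STEP, -STEP
            count = 0
        else:
            cmd["rot_y"], cmd["rot_z"] = -STEP, -STEP
            count += 1
        if count > COUNT_MAX:
            state = 4 if pos < POS_MAX - 1 else -1   # -1 = finished
    elif state == 4:                      # translate to a new start position
        cmd["trans_y"], cmd["trans_z"] = STEP, STEP / 2
        if not white:                     # reached the black area again
            pos += 1
            state = 1
    return state, count, pos, cmd
```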
6.6 Simulation
Connecting the different building blocks into a feedback loop (see figure 6.6),
the controller completes the search pattern, following both a horizontal line and
a vertical line twice, in about 1300 time steps. In this model one time step
symbolises the time it takes for the robot to move to the next set point.
Figure 6.7 shows how the beam sweeps over the plane in the x and y directions
over time, and also how the sensor output reacts over time.
Rotating the black area is possible for angles up to at least ±π/6. The
corresponding search patterns are displayed in figure 6.8 and figure 6.9.
[State-machine diagram: in each of the states 0-4 the sensor input In (0 = black, 1 = white) drives the transitions; the machine terminates after state 3 when count_old > count_max && pos_old >= pos_max.]
Figure 6.6: The main Simulink model, connecting the different submodels. The Plane block rotates the black field around Z (Tout = Tin * rotz(z)); its output transformation is in reality one of the unknowns. The Robot, Sensor, Controller and Saver submodels are connected in a feedback loop over the world, robot base, wrist and sensor frames.
Figure 6.7: Simulation results of how the x, y and sensor outputs vary with time
Chapter 7
Implementation
The realisation of the bar-code reader concept from section 4.2 needed four
things:
• a laser with integrated sensor
Make or buy?
While facing questions on how to optimise the optical, mechanical and
electrical properties of such a device, research on the Internet turned up three
similar devices from the same manufacturer.
Keyence LV Series
The LV Series from the Keyence corporation are sensors that can be used to
detect objects in a number of different situations. From this series there were
three different sensors suitable for this application [Provicon, 2006]:
• LV-H32 - adjustable beam spot (min Ø0.3 mm)
• LV-H35 - constant beam spot (Ø2 mm), coaxial laser and sensor
From these three, the LV-H32 was chosen since it seemed to be the most
flexible and thus best suited for the experimental setup.
In a production setup the LV-H37 might be a better selection, since the
smaller beam spot would give better repeatability. But since the precise
locations of the objects in the robot cell are unknown beforehand, a sensor
with shorter range would increase the risk of collision between the
robot/sensor and the other objects.
Amplifier
An amplifier (LV-21AP) came with the sensor. The amplifier filters the
analogue signal from the sensor and outputs a digital signal on the black cable
depending on a trigger level. There are a number of different modes and
settings that can be adjusted. The most convenient feature is the automatic
tuning: by pressing the set button while passing the optical axis back and
forth over the edge, the trigger level is adjusted to the midpoint between
the maximum and minimum light intensity detected (see figure 7.1). This
feature can also be controlled by grounding the pink cable, hence it is possible to
automate the sensor tuning. This possibility was however not implemented in
the experimental setup, since it is easy to tune the sensor manually by pressing
the button. It is possible to interrupt the laser radiation by short-circuiting the
purple cable with the brown power cable, but this was also not implemented
in the current setup. [Keyence, 2006]
The price of the sensor and amplifier was about 5500 SEK, which is about
15% of the price of the sensor described in section 2.4 [Provicon, 2006].
Mounting
Included with the sensor was also a general purpose mounting bracket (see
figure 7.2).
To be able to fit the mounting bracket to the robot arm, an adaptor plate
was made in aluminium (see drawing in appendix B.1).
7.3 Robot
The sensor was mounted on a standard ABB IRB2400 robot. Control output A
from the sensor (black cable) was connected to the digital IO on the robot
controller called digIn1.
Rapid
Every robot manufacturer has developed at least one programming language
of its own, hence there exist several hundred different languages and dialects
[Freund et al., 2001].
The language developed for the ABB robots is called Rapid. The robot
program for the experiment was made by manually porting the control program
from the Matlab simulation (see section 6.4) to Rapid without any major
logical changes.
In the robot program the search pattern starts relative to the current position
of the robot, to simplify the handling of the experiment. In a production
setup this start position would instead be programmed in some 3D simulation
environment.
7.4 Data processing
Extracting vectors
A Matlab function, extract_vector.m, was written that filters and transforms
the data from the test into vectors corresponding to the known beam paths
(see appendix B.3). The filter program works in three steps:
1. The rows from the logfile are extracted depending on the state, pos and
Mounting errors
The problem is however not that easy, due to the laser beam mounting errors
(see section 4.1). Instead the measurement data consist of lines that start at
given positions P with a constant unknown mounting error dP, and point
in given directions R with a constant unknown error dR, at the unknown
but straight lines X0 + m(i)Rx and X0 + m(j)Ry (see figure 7.4). This leads to
equation 7.1:

  (P(i) + dP) + t(i)(R(i) + dR) = X0 + m(i)Rx
                                                   (7.1)
  (P(j) + dP) + t(j)(R(j) + dR) = X0 + m(j)Ry
The two unknown lines X0 + m(i)Rx and X0 + m(j)Ry are also known
to be perpendicular, which means that Rx ⊥ Ry. The dot product of two
perpendicular vectors is zero [Sparr, 1994]. Hence equation 7.2 poses an
additional constraint on the solution:

  Rx · Ry = 0   (7.2)

Solving these equations is a nonlinear problem, since m(i), Rx, m(j) and Ry
are all unknown. One approach tried was the nonlinear data-fitting method
lsqnonlin in Matlab, based on the Levenberg-Marquardt algorithm
[MathWorks Inc., 2006], but this was not successful.
[Figure 7.4: Measured beam paths (P(i) + dP) + t(i)(R(i) + dR), i = 1, 2, 3, aimed at the unknown line X0 + m(i)Rx.]
Generalised solution
Instead of finding the exact solution, a more generalised approach was used
that does not compensate for the mounting errors. Each plane was divided into
two sets of lines (see figure 7.5). The plane was assumed to start in the point
X0 = (x0, y0, z0), taken as the mean value of P, and the two non-parallel
vectors spanning the plane, R1 = (α1, β1, γ1) and R2 = (α2, β2, γ2), were
taken as the mean values of the directional vectors of the two sets respectively.
Figure 7.5: The plane starting in X0 and spanned by the direction vectors R1 and R2
Computing the determinant gives the general equation of the plane (equation 7.4).
ax + by + cz + d = 0    (7.4)
where
a = β1γ2 − β2γ1
b = −α1γ2 + α2γ1    (7.5)
c = α1β2 − α2β1
d = −x0 a − y0 b − z0 c
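Equation 7.5 is the cross product R1 × R2 together with the plane condition at X0. A minimal numpy sketch of the same computation (mirroring the gen_plane.m routine in appendix B.3):

```python
import numpy as np

def gen_plane(X0, R1, R2):
    """Coefficients (a, b, c, d) of the plane through X0 spanned by R1, R2.

    (a, b, c) is the cross product R1 x R2 (equation 7.5) and
    d = -X0 . (a, b, c), so that a*x + b*y + c*z + d = 0 on the plane.
    """
    n = np.cross(R1, R2)        # (a, b, c)
    d = -np.dot(X0, n)          # d = -x0*a - y0*b - z0*c
    return np.append(n, d)      # [a, b, c, d]
```

For example, the plane through (1, 2, 3) spanned by the x and y axes comes out as z = 3, i.e. (0, 0, 1, −3).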
Computing equation 7.4 for each of the four measured planes gives an
overdetermined equation system (equation 7.6).
AX0 = −D    (7.6)
where
A = [a1 b1 c1; a2 b2 c2; a3 b3 c3; a4 b4 c4]    (7.7)
X0 = [x0; y0; z0]    (7.8)
D = [d1; d2; d3; d4]    (7.9)
The location of the intersection between the four unknown planes is solved
with a least squares fit. In Matlab that is done by typing
X0 = A\−D    (7.10)
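The least-squares solve of equation 7.6 can be sketched in numpy; the four planes below are synthetic stand-ins passing through a known point, not measured data:

```python
import numpy as np

# Four plane normals, chosen so that all planes pass through the known
# point P_true; rows of A are (a_i, b_i, c_i) and the offsets are
# d_i = -P_true . n_i, as in equation 7.5.
P_true = np.array([1.0, 2.0, 3.0])
A = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 1.0, 1.0]])
D = -A @ P_true

# A X0 = -D solved in the least-squares sense (Matlab: X0 = A \ -D)
X0, *_ = np.linalg.lstsq(A, -D, rcond=None)
```

With exact synthetic planes the least-squares solution recovers P_true; with measured planes it returns the point minimising the residual.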
The unit normal vector n = (nx, ny, nz) for each plane is given by
nx = a/√(a² + b² + c²)
ny = b/√(a² + b² + c²)    (7.11)
nz = c/√(a² + b² + c²)
bx = −[p1; p2]    (7.15)
then
mx X0 = bx    (7.16)
gives the direction (Rx) of the intersection (X0 + tRx) as the negative
nullspace of mx.
Rx = −null(mx)    (7.17)
The y direction of the unknown coordinate system is then given in a similar
way as
Ry = −null(my ) (7.18)
where
my = [n3 , n4 ] (7.19)
[Weisstein, 2002]
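The nullspace computation can be sketched in numpy via the SVD; intersection_direction is an illustrative name, and the sign of the returned vector is arbitrary (hence the explicit minus signs in equations 7.17 and 7.18):

```python
import numpy as np

def intersection_direction(n1, n2):
    """Direction of the line where two planes with unit normals n1, n2 meet.

    The direction is orthogonal to both normals, i.e. it spans the
    nullspace of the 2x3 matrix [n1; n2] (Matlab: null(mx)); the last
    right-singular vector of the SVD spans that nullspace.
    """
    _, _, Vt = np.linalg.svd(np.vstack([n1, n2]))
    return Vt[-1]               # unit vector; sign is arbitrary
```

For two axis-aligned planes, e.g. normals (1, 0, 0) and (0, 1, 0), the returned direction is ±(0, 0, 1).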
In order for this method to be useful for finding X0, Rx and Ry, it must be
assumed that the sensor mounting was calibrated beforehand, even though
no methods for that were successfully developed during this project.
Matlab implementation
The generalised solution was implemented in Matlab (see appendix B.3). In
the Matlab program, the coordinate system is first calculated for the data
collected when the optical axis made the transition from white to black, then
for the data from the black to white transition. Both calculations are plotted
in the same 3D graph to simplify comparisons.
Chapter 8
Results
8.2. Data analysis 37
Perpendicular directions?
Computing the dot product of the directional vectors for the black data gave
Rx · Ry = 0.0117 ≠ 0    (8.3)
and
Rx · Ry = 0.0026 ≠ 0    (8.4)
for the white data. Since neither dot product satisfied
Rx · Ry = 0    (8.5)
the vectors were not exactly perpendicular, as they were supposed to be.
It was seen from the 3D plots that the white and the black directions
diverged, especially in the Ry direction.
Chapter 9
Conclusions
9.1 Simulation
Simulation is a powerful tool for a number of different problems, especially in
the field of robotics control. Spending some time building models might save
a lot of time in the implementation phase, due to the possibility of "playing
around" with the models without the risk of destroying things.
9.2 Experiment
This successful implementation has shown that an accurate robot, in
combination with a relatively low-complexity laser sensor system, can be
used as a quite advanced measuring device.
9.4. Future development 39
Looking at the plots, the positional measurements seem to be more reliable
than the information about the directions. This is consistent with section 4.1,
where the small error angle makes the error grow far from the centre. One
option is thus to use the measured data to define one point (X0) and then
calculate the directions as the directions to other measured points (X1 and
X2), all defined on a flat surface.
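That suggestion amounts to building an orthonormal frame from three measured points on the surface; a numpy sketch (frame_from_points is an illustrative name, not part of the implemented code):

```python
import numpy as np

def frame_from_points(X0, X1, X2):
    """Orthonormal frame from three measured points on a flat surface.

    Rx points from X0 towards X1, Rz is the surface normal and Ry
    completes the right-handed frame; X2 only needs to lie somewhere
    on the surface, off the X0-X1 line.
    """
    Rx = (X1 - X0) / np.linalg.norm(X1 - X0)
    Rz = np.cross(Rx, X2 - X0)
    Rz = Rz / np.linalg.norm(Rz)
    Ry = np.cross(Rz, Rx)
    return Rx, Ry, Rz
```

Because each direction is derived from well-separated measured points rather than from the small-angle beam geometry, the angular error no longer grows with distance from the centre.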
Bibliography 41
J.F. Quinet, Krypton France. Calibration for offline programming purpose and
its expectations. Industrial Robot, Vol. 22, No. 3, 1995, pp. 9-14.
Markus Seyfarth. SMErobot project no. 011838, Report on state of the art
calibration methods, 2006.
List of Figures 43
B.1 The adapter between the mounting bracket and the robot wrist,
Petter Johansson . . . . . . . . . . . . . . . . . . . . . . . . . . 53
C.1 The four planes intersecting, Petter Johansson . . . . . . . . . . 71
C.2 The four planes intersecting, Petter Johansson . . . . . . . . . . 72
C.3 View from the x-y plane, Petter Johansson . . . . . . . . . . . . 73
C.4 View from the x-z plane, Petter Johansson . . . . . . . . . . . . 74
C.5 View from the y-z plane, Petter Johansson . . . . . . . . . . . . 75
Appendix A
Simulation models
% calculate intersection of the plane a + u*b + w*c
% with the line d + t*e, i.e. solve [b,c,-e]*[u;w;t] = d - a
k = inv([b,c,-e])*(d-a);
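The same solve can be sketched in numpy; np.linalg.solve plays the role of inv() above, and the values are illustrative (a vertical line hitting the x-y plane):

```python
import numpy as np

# Intersection of the plane a + u*b + w*c with the line d + t*e:
# rearranged into [b, c, -e] * [u; w; t] = d - a.
a = np.array([0.0, 0.0, 0.0])    # point on the plane
b = np.array([1.0, 0.0, 0.0])    # first spanning vector
c = np.array([0.0, 1.0, 0.0])    # second spanning vector
d = np.array([0.5, 0.5, 1.0])    # point on the line
e = np.array([0.0, 0.0, -1.0])   # line direction

k = np.linalg.solve(np.column_stack([b, c, -e]), d - a)
u, w, t = k
hit = d + t * e                  # the intersection point
```

Solving the system directly is numerically preferable to forming the explicit inverse, though the result is the same for a well-conditioned 3x3 system.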
46 Appendix A. Simulation models
White?
function out = fcn(box,x,y)
% input
sx = box(1); % size in x direction (mm)
sy = box(2); % size in y direction (mm)
% rotate
wrist_old = wrist_old * rotx(rot(1)) * roty(rot(2)) * rotz(rot(3));
% translate
wrist_old = wrist_old * [eye(3),trans;0,0,0,1];
% output
wrist_new = wrist_old;
switch state_old
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% rotate to horizontal line (state = 0) %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
case 0
% when white
if(In)
translate = [0;0;0];
rotate = [0;-pi/(2*1800);pi/(2*1800)];
% when black
else
translate = [0;0;0];
rotate = [0;0;0];
end
% change state
if(In)
state_new = 0;
else
state_new = 1;
end
count_new = 0;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% follow horizontal line (state = 1) %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
case 1
% when white
if(In)
translate = [0;0;0];
rotate = [0;pi/(2*1800);pi/(2*1800)];
count_new = count_old + 1;
% when black
else
translate = [0;0;0];
rotate = [0;pi/(2*1800);-pi/(2*1800)];
count_new = 0;
end
% change state
if(count_old > count_max)
state_new = 2;
count_new = 0;
else
state_new = 1;
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% rotate to vertical line (state = 2) %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
case 2
% when white
if(In)
translate = [0;0;0];
rotate = [0;-pi/(2*1800);0];
% when black
else
translate = [0;0;0];
rotate = [0;0;0];
end
% change state
if(In)
state_new = 2;
else
state_new = 3;
end
count_new = 0;
A.3. Controller model 51
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% follow vertical line (state = 3) %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
case 3
% when white
if(In)
translate = [0;0;0];
rotate = [0;-pi/(2*1800);-pi/(2*1800)];
count_new = count_old + 1;
% when black
else
translate = [0;0;0];
rotate = [0;pi/(2*1800);-pi/(2*1800)];
count_new = 0;
end
% change state
if(count_old > count_max && pos_old < pos_max)
state_new = 4;
count_new = 0;
elseif(count_old > count_max && pos_old >= pos_max)
state_new = 10; % STOP
else
state_new = 3;
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% translate to horizontal line (state = 4) %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
case 4
% when white
if(In)
translate = [0;1;0.5];
rotate = [0;0;0];
% when black
else
translate = [0;0;0];
rotate = [0;0;0];
end
% change state
if(In)
state_new = 4;
else
state_new = 1;
pos_old = pos_old + 1;
end
count_new = 0;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% in the end (state = x) %
% => don't change anything %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
otherwise
state_new = state_old;
count_new = count_old;
rotate = [0;0;0];
translate = [0;0;0];
end
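The change-state logic of the five states above can be condensed into one transition function; a Python sketch with an illustrative name and return layout (new state, new count, new pos):

```python
def next_state(state, count, pos, white, count_max=100, pos_max=1):
    """State transitions of the scanning controller (10 = stop).

    Mirrors the change-state logic of the switch block above; the
    rotate/translate outputs are omitted. As in the original, the
    state-change tests use the old count, not the updated one.
    """
    if state == 0:                        # rotate to horizontal line
        return (0 if white else 1), 0, pos
    if state == 1:                        # follow horizontal line
        new_count = count + 1 if white else 0
        if count > count_max:
            return 2, 0, pos
        return 1, new_count, pos
    if state == 2:                        # rotate to vertical line
        return (2 if white else 3), 0, pos
    if state == 3:                        # follow vertical line
        new_count = count + 1 if white else 0
        if count > count_max and pos < pos_max:
            return 4, 0, pos
        if count > count_max and pos >= pos_max:
            return 10, new_count, pos     # STOP
        return 3, new_count, pos
    if state == 4:                        # translate to next line
        if white:
            return 4, 0, pos
        return 1, 0, pos + 1
    return state, count, pos              # stop: change nothing
```

For example, losing the black line in state 0 advances to state 1, and running off the last line (pos at pos_max) from state 3 reaches the stop state 10.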
Appendix B
Implementation
Figure B.1: The adapter between the mounting bracket and the robot wrist
54 Appendix B. Implementation
MODULE lasercb
!!!!!!!!!!!!!!!!!!!!!!!!
! variables in main: !
!----------------------!
! beam !
! !
! variables in cal: !
!----------------------!
VAR num rot_step;
VAR num trans_step;
VAR speeddata go_speed;
VAR speeddata trace_speed;
VAR num count_max;
VAR num pos_max;
VAR robtarget p1;
VAR num state;
VAR num pos;
VAR num count;
VAR intnum black_white;
! variables in saver !
!----------------------!
VAR iodev logfile;
VAR robtarget curr_pos;
VAR robtarget psave;
!!!!!!!!!!!!!!!!!!!!!!!!
! transformation from wrist to beam.
! Must be calibrated in some way. This is just a start:
PERS tooldata
beam:=[TRUE,[[0,0,165],[1,0,0,0]],[0.5,[0,0,5],[1,0,0,0],0,0,0]];
B.2. Rapid code 55
!%%%%%%%%%%%%%%%%
!% Main program %
!%%%%%%%%%%%%%%%%
PROC main()
curr_pos:= CRobT(\Tool:=beam\WObj:=wobj0);
! find the current position
cal (RelTool(curr_pos,200,0,0\Ry:=-90));
ENDPROC
!%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
!% Calibration routine, pin is the target position %
!%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
PROC cal(
robtarget pin)
!%%%%%%%%%%%%
!% Settings %
!%%%%%%%%%%%%
! rotational stepsize (degrees)
rot_step:=0.05;
! translational stepsize (mm)
trans_step:=0.2;
! speed when goto next line
go_speed:=v10;
! speed when following line
trace_speed:=v10;
!when to stop run away
count_max:=100;
! = number of data sets - 1
pos_max:=1;
!%%%%%%%%
!% Init %
!%%%%%%%%
! showposition(pin)
p1:=RelTool(pin,0,0,200\Ry:=90);
MoveL p1,go_speed,fine,beam;
! wait for 2 seconds in the showposition
WaitTime\InPos,2;
! startposition(pin)
p1:=RelTool(pin,0,-25,200\Ry:=90);
MoveL p1,go_speed,fine,beam;
! start in state 0
state:=0;
pos:=0;
count:=0;
! open logfile
Open "HOME:"\File:="LOGFILE1.DOC",logfile\Write;
! init interrupt
CONNECT black_white WITH saver;
ISignalDI digIn1,edge,black_white;
WHILE TRUE DO
TEST state
CASE 0:
!%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
!% rotate to horizontal line (state = 0) %
!%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
! when white
IF digIn1=1 THEN
p1:=RelTool(p1,0,0,0\Ry:=-rot_step\Rz:=rot_step);
!translate = [0;0;0];
!rotate = [0;-pi/(2*1800);pi/(2*1800)];
! when black
ELSE
p1:=p1;
!translate = [0;0;0];
!rotate = [0;0;0];
ENDIF
MoveL p1,go_speed,fine,beam;
! change state
IF digIn1=1 THEN
state:=0;
ELSE
state:=1;
ENDIF
count:=0;
CASE 1:
!%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
!% follow horizontal line (state = 1) %
!%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
! when white
IF digIn1=1 THEN
p1:=RelTool(p1,0,0,0\Ry:=rot_step\Rz:=rot_step);
!translate = [0;0;0];
!rotate = [0;pi/(2*1800);pi/(2*1800)];
count:=count+1;
! when black
ELSE
p1:=RelTool(p1,0,0,0\Ry:=rot_step\Rz:=-rot_step);
!translate = [0;0;0];
!rotate = [0;pi/(2*1800);-pi/(2*1800)];
count:=0;
ENDIF
MoveL p1,trace_speed,fine,beam;
! change state
IF count>count_max THEN
state:=2;
count:=0;
ELSE
state:=1;
ENDIF
CASE 2:
!%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
!% rotate to vertical line (state = 2) %
!%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
! when white
IF digIn1=1 THEN
p1:=RelTool(p1,0,0,0\Ry:=-rot_step);
!translate = [0;0;0];
!rotate = [0;-pi/(2*1800);0];
! when black
ELSE
p1:=p1;
!translate = [0;0;0];
!rotate = [0;0;0];
ENDIF
MoveL p1,go_speed,fine,beam;
! change state
IF digIn1=1 THEN
state:=2;
ELSE
state:=3;
ENDIF
count:=0;
CASE 3:
!%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
!% follow vertical line (state = 3) %
!%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
! when white
IF digIn1=1 THEN
p1:=RelTool(p1,0,0,0\Ry:=-rot_step\Rz:=-rot_step);
!translate = [0;0;0];
!rotate = [0;-pi/(2*1800);-pi/(2*1800)];
count:=count+1;
! when black
ELSE
p1:=RelTool(p1,0,0,0\Ry:=rot_step\Rz:=-rot_step);
!translate = [0;0;0];
!rotate = [0;pi/(2*1800);-pi/(2*1800)];
count:=0;
ENDIF
MoveL p1,trace_speed,fine,beam;
! change state
IF count>count_max AND pos<pos_max THEN
!if(count_old > count_max && pos_old < pos_max)
state:=4;
count:=0;
ELSEIF count>count_max AND pos>=pos_max THEN
!elseif(count_old > count_max && pos_old >= pos_max)
state:=10;
! STOP
ELSE
state:=3;
ENDIF
CASE 4:
!%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
!% translate to horizontal line (state = 4) %
!%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
! when white
IF digIn1=1 THEN
p1:=RelTool(p1,0,trans_step,0.5*trans_step);
!translate = [0;1;0.5];
!rotate = [0;0;0];
! when black
ELSE
p1:=p1;
!translate = [0;0;0];
!rotate = [0;0;0];
ENDIF
MoveL p1,go_speed,fine,beam;
! change state
IF digIn1=1 THEN
state:=4;
ELSE
state:=1;
pos:=pos+1;
ENDIF
count:=0;
DEFAULT:
!%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
!% in the end (state = x) %
!% => shut down %
!%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
!shut down function
!disable interrupt
IDelete black_white;
!close logfile
Close logfile;
RETURN;
ENDTEST
ENDWHILE
ENDPROC
!%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
!% Interupt routine, save orientation data to file %
!%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
TRAP saver
! save transformation from world coordinate system (wobj0) to
! robot wrist coordinate system (tool0)
psave:=CRobT(\Tool:=tool0\WObj:=wobj0);
Write logfile," "\Num:=state\NoNewLine;
Write logfile," "\Num:=pos\NoNewLine;
Write logfile," "\Num:=digIn1\NoNewLine;
Write logfile," "\Num:=psave.trans.x\NoNewLine;
Write logfile," "\Num:=psave.trans.y\NoNewLine;
Write logfile," "\Num:=psave.trans.z\NoNewLine;
Write logfile," "\Num:=psave.rot.q1\NoNewLine;
Write logfile," "\Num:=psave.rot.q2\NoNewLine;
Write logfile," "\Num:=psave.rot.q3\NoNewLine;
Write logfile," "\Num:=psave.rot.q4;
ENDTRAP
ENDMODULE
B.3. Data processing 63
%% settings
%there are four outliers in this first run:
%logfile = importdata('LOGFILE1.DOC'); % import data from textfile
%% variable declarations
P = zeros(3,4); % known origin(s)
R1 = zeros(3,4); % known direction 1 of plane(s)
R2 = zeros(3,4); % known direction 2 of plane(s)
X0 = zeros(3,length(blackandorwhite)); % unknown origin(s)
Rx = zeros(3,length(blackandorwhite)); % unknown x direction(s)
Ry = zeros(3,length(blackandorwhite)); % unknown y direction(s)
Z = zeros(3,length(blackandorwhite)); % unknown z direction(s)
% mean
P(:,m) = mean(log(:,7:9))';
R1(:,m) = mean(log(1:round(size(log,1)/2),4:6))';
R2(:,m) = mean(log(round(size(log,1)/2)+1:end,4:6))';
%% calculate X0
abcd = gen_plane(P,R1,R2); % det([P';R1';R2'])=0
abc = abcd(1:3,:)'; % =>
d = abcd(4,:)'; % ax + by + cz + d = 0
X0(:,k+1) = abc\-d; % abc*X0 = -d
%% plot X0
scatter3(X0(1,k+1),X0(2,k+1),X0(3,k+1),'+');
%% m*X0 = b
mx = [n(:,1),n(:,2)];
my = [n(:,3),n(:,4)];
extract_vector.m
%
% Extracts vector data from a logfile
%
% y = extract_vector(file, Tsensor, state, pos, digIn1)
%
% 1. Extracts the rows from logfile that correspond to:
% state, pos, digIn1
%
% 2. Uses transformation matrix Tsensor to transform from
% wrist coordinates into tool coordinates
%
% 3. Returns an n x 9 matrix containing
% state, pos, digIn, r11, r21, r31, px, py, pz
%
% Copyright (C) 2007 Petter Johansson
for i = 1:length(logfile)
if(logfile(i,1)==state && logfile(i,2)==pos && logfile(i,3)==digIn1)
% 1 convert from quaternion to homogeneous transform
tr = q2tr(logfile(i,7:10));
% recreate transformation matrix from robot base to robot wrist
Twrist = [tr(1:3,1:3),logfile(i,4:6)';0,0,0,1];
% return the data as state, pos, digIn, r11, r21, r31, px, py, pz
y = linedata(1:j,:);
testfcn.m
%
% Plot line(s) starting in x in the m direction
%
% Copyright (C) 2007 Petter Johansson
function y = testfcn(x,m,length,linesp)
f = zeros(2,1);
n = size(x);
for j=1:n(2)
for i=0:1
f(i+1,1:3) = x(:,j) + i*length*m(:,j);
end
plot3 (f(:,1),f(:,2),f(:,3), linesp); figure(gcf)
end
y = f;
gen_plane.m
%
% Find the plane(s) that pass through P(i)
% and is parallel to R1(i) and R2(i)
%
% det([P(i)';R1(i)';R2(i)']) = 0
% =>
% ax + by + cz + d = 0
%
% Copyright (C) 2007 Petter Johansson
a = zeros(1,length(P));
b = zeros(1,length(P));
c = zeros(1,length(P));
d = zeros(1,length(P));
for i = 1:length(P)
a(i) = R1(2,i)*R2(3,i)-R2(2,i)*R1(3,i); % beta1*gamma2-beta2*gamma1
b(i) =-R1(1,i)*R2(3,i)+R2(1,i)*R1(3,i); %-alfa1*gamma2+alfa2*gamma1
c(i) = R1(1,i)*R2(2,i)-R2(1,i)*R1(2,i); % alfa1*beta2-alfa2*beta1
d(i) =-P(1,i)*a(i)-P(2,i)*b(i)-P(3,i)*c(i); %-x0*a-y0*b-z0*c
end
hessian.m
%
% Transform the plane:
% ax + by + cz + d = 0
% into Hessian normal form:
% n*x0=-p
%
% Copyright (C) 2007 Petter Johansson
function np = hessian(abcd)
a = abcd(1,:);
b = abcd(2,:);
c = abcd(3,:);
d = abcd(4,:);
nx = zeros(1,length(a));
ny = zeros(1,length(a));
nz = zeros(1,length(a));
p = zeros(1,length(a));
for i = 1:length(a)
nx(i) = a(i)/(sqrt(a(i)^2+b(i)^2+c(i)^2));
ny(i) = b(i)/(sqrt(a(i)^2+b(i)^2+c(i)^2));
nz(i) = c(i)/(sqrt(a(i)^2+b(i)^2+c(i)^2));
p(i) = d(i)/(sqrt(a(i)^2+b(i)^2+c(i)^2));
end
np = [nx;ny;nz;p];
q2tr.m
% Q2TR Convert unit-quaternion to homogeneous transform
% T = q2tr(Q)
% Return the rotational homogeneous transform corresponding
% to the unit quaternion Q.
% Copyright (C) 1993 Peter Corke
function t = q2tr(q)
q = double(q);
s = q(1);
x = q(2);
y = q(3);
z = q(4);
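The listing is truncated after unpacking the quaternion; the standard conversion it performs can be sketched in numpy (a re-implementation for illustration, not Corke's original code):

```python
import numpy as np

def q2tr(q):
    """Homogeneous transform for a unit quaternion q = (s, x, y, z).

    The 3x3 block is the standard quaternion rotation matrix;
    the translation part is zero.
    """
    s, x, y, z = (float(v) for v in q)
    t = np.eye(4)
    t[:3, :3] = [
        [1 - 2*(y*y + z*z), 2*(x*y - s*z),     2*(x*z + s*y)],
        [2*(x*y + s*z),     1 - 2*(x*x + z*z), 2*(y*z - s*x)],
        [2*(x*z - s*y),     2*(y*z + s*x),     1 - 2*(x*x + y*y)],
    ]
    return t
```

The identity quaternion (1, 0, 0, 0) maps to the identity transform, and (√½, 0, 0, √½) to a 90° rotation about z, which is a quick sanity check on the sign conventions.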
Appendix C
Results
C.1 3D plots
Figure C.1: The four planes intersecting
72 Appendix C. Results
Figure C.2: The four planes intersecting
Figure C.3: View from the x-y plane
Figure C.4: View from the x-z plane
Figure C.5: View from the y-z plane