
Robot Cell Calibration

using a laser pointer


SME Robot

Petter Johansson

Version no. 0.1

M.Sc. thesis in Machine Construction

Department of Machine Design


Department of
Machine Design
Lund University
Box 118
SE-221 00 LUND
SWEDEN

http://www.design.lth.se

© Petter Johansson, 2007


Abstract

The purpose of the project was to design a robot cell calibration method built
on simple laser sensing methods. The main reason for this was to make it
easier for small and medium sized enterprises to use robots when producing in
short series.
Different working principles for the laser and the sensor were explored, and a
laser-sensor device similar to a bar-code reader, where the intensity of reflected
light is measured, was chosen. The laser beam tracks a fixed black square
printed on a normal paper from different directions to obtain the unknown
coordinate system by calculating the intersection of the different beam paths.
A simulation model was built in the Matlab environment Simulink, where
a controller program was developed and tested. The controller program was
then ported to a robotic programming language and successfully run on a real
industrial robot.
Matlab routines for analysing the measured data were also implemented.
Contents

1 Introduction 1
1.1 Purpose of project . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 The task . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
2 Background 2
2.1 Accuracy and repeatability . . . . . . . . . . . . . . . . . . . . 2
2.2 Offline programming . . . . . . . . . . . . . . . . . . . . . . . 2
2.3 Objective of calibration . . . . . . . . . . . . . . . . . . . . . . 2
2.4 Previous methods . . . . . . . . . . . . . . . . . . . . . . . . . . 4
3 Laser and Transducers 6
3.1 Physics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
3.2 Laser . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
3.3 Photodiode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
4 Concepts 10
4.1 Intersection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
4.2 Proposals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
4.3 Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
5 Kinematics 16
5.1 Frame description . . . . . . . . . . . . . . . . . . . . . . . . . . 16
6 Simulation 18
6.1 Simulink . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
6.2 Sensor model . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
6.3 Robot model . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
6.4 Controller model . . . . . . . . . . . . . . . . . . . . . . . . . . 20
6.5 Saver model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
6.6 Simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
7 Implementation 28
7.1 Laser and sensor . . . . . . . . . . . . . . . . . . . . . . . . . . 28
7.2 Paper pattern . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
7.3 Robot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
7.4 Data processing . . . . . . . . . . . . . . . . . . . . . . . . . . . 31

8 Results 36
8.1 Search pattern . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
8.2 Data analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
9 Conclusions 38
9.1 Simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
9.2 Experiment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
9.3 Data analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
9.4 Future development . . . . . . . . . . . . . . . . . . . . . . . . 39
A Simulation models 45
A.1 Sensor model . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
A.2 Robot model . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
A.3 Controller model . . . . . . . . . . . . . . . . . . . . . . . . . . 49
B Implementation 53
B.1 Adapter drawing . . . . . . . . . . . . . . . . . . . . . . . . . . 53
B.2 Rapid code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
B.3 Data processing . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
C Results 71
C.1 3D plots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71

Chapter 1

Introduction

1.1 Purpose of project


/../ More than 228 000 manufacturing SMEs in the EU are a
crucial factor in Europe's competitiveness, wealth creation, quality
of life and employment. To enable the EU to become the most
competitive region in the world, the Commission has emphasized
research efforts aimed at strengthening knowledge-based manufacturing
in SMEs as agreed at the Lisbon Summit and as pointed out
at MANUFUTURE-2003. However, existing automation technologies
have been developed for capital-intensive large-volume manufacturing,
resulting in costly and complex systems, which typically
cannot be used in an SME context. Therefore, manufacturing
SMEs are today caught in an "automation trap": they must either
opt for current and inappropriate automation solutions or compete
on the basis of lowest wages. A new paradigm of affordable and
flexible robot automation technology, which meets the requirements
of SMEs, is called for. /../

[SMErobot, 2006]

1.2 The task


The task of this master's thesis is to develop a simple, low-cost robot cell
calibration method using a normal laser pointer and a sensor of low complexity,
in order to simplify the use of robots in short series production.

Chapter 2

Background

2.1 Accuracy and repeatability


The terms accuracy and repeatability are often confused. Repeatability is in
this case the ability of the robot to return to the same pose over and over. This
is necessary if the robot is to work consistently over the cycles. Manipulator
accuracy is the precision at which a specified Cartesian pose is reached [Seyfarth, 2006].

2.2 Offline programming


From 1963, when robotics began, until the beginning of the 1980s robots were
mostly programmed by teach-in methods. At first the robots were moved into
position by hand. Over the years, different programming methods developed
for different applications. For spot welding and material handling, teach pendants
were introduced. Other applications like spray painting made use of physical
robot models called syntaxers [Quinet, 1995]. When the programming was
made with an operator as position feedback there was no need for calibration,
since a robot that was taught by showing only had to play back what it was
shown, i.e. the repeatability had to be good [Seyfarth, 2006].
In recent years different 3D simulation environments have been developed
where entire robotic cells can be designed, programmed and tested offline [UltraArc,
2006; Visual components, 2006; ABB, 2006; Motoman, 2006]. This
provides a very convenient way of working, reducing downtimes and making
more complex programs possible. But there is one catch: the simulations are
only models of reality. Deviations between the models and the real world
will make the programs useless if they are not adjusted, i.e. if the accuracy is
not increased by calibration [Quinet, 1995].

2.3 Objective of calibration


Calibration of a robot system can be divided into separate parts.
Signature calibration is calibration of the kinematic robot model: the relation
between the real joints and the joint transducer signals, non-kinematic
parameters such as bending and temperature deviation, etc. This is in general the
responsibility of the robot manufacturer and is thus not treated in this project.
Calibration of the robot TCP (tool center point) is very important since it
has a direct impact on the accuracy. The calibration has to be redone for every
new tool or when the tool for some reason is damaged. ABB has an automatic
solution called Bullseye [ABB, 2006] that measures the TCP of a welding gun
with a light source/transducer. TCP calibration is however not treated in this
project due to delimitations.
Calibration of static relations between objects in the robot cell, such as robot
- cell origin, robot - workpiece/fixture, robot - robot or robot - machine, is
important if the workcell is flexible and cell setups often change due to changes
in production. This is the main focus of this project.

Position error
In a simulation layout a fixture might be placed at X = 5, Y = 5. But it turns
out that in the real layout the actual fixture is placed at X = 6, Y = 6. The
first solution to this problem one thinks of is to measure the correct position,
update the offline model and download a new program to the robot controller.
This adjustment might work in a one-robot cell.
Assume instead that the cell contains two robots and one fixture. Viewing
the first robot as the world origin, the second robot is placed in the wrong
position compared to the offline model. If the position of the second robot
is updated according to the measurements, the relations between the second
robot and the fixture have to be updated as well (see figure 2.1) [Seyfarth, 2006].


Figure 2.1: Position error

By defining a local coordinate system on the fixture, in which the object to
be worked on is defined, each robot just has to find the coordinate system of
the fixture to find the object to work with. If each robot individually measures
the relation between itself and the fixture, the points on the workpiece can be
calculated without additional touch-ups.

2.4 Previous methods


As indicated in section 2.2, the issue of cell calibration has been important since
the introduction of offline programming.

Manual teach in
The most common way of calibrating a workcell has been to use the robot as
measuring device and an operator as director with feedback. This is similar to
the teach-in methods used for offline programming and thus very time-consuming
[Seyfarth, 2006].

External measurement equipment


Another established method is using a laser tracker. Leica Geosystems has
one product where a laser beam tracks a portable reflector (see figure 2.2). The
reflector can be mounted either on a moving robot or on a fixed object in the
robot cell. The tracker measures the direction to the reflector by measuring
the angle of a rotating mirror. The relative distance is measured with an
interferometer [Leica Geosystems, 2007].

Figure 2.2: Laser tracker



Robot mounted laser-based triangulation


A suggested method [Bernardi et al., 2001] makes use of a laser triangulation
device (see figure 2.3). By measuring the reflection angle at a position a few
centimeters from the laser, the distance to an object can be calculated.

Figure 2.3: Laser triangulation sensor

Placing the device as the robot tool and measuring different points, this tool
is used to identify the position of rectangular plates that specify the different
frames of interest. The method is interesting in that the plates are cheap and
easily attached to any object, but the cost of a high precision triangulation
device might be prohibitive. A laser triangulation device from the Keyence
LK series starts at 40 000 SEK [Provicon, 2006].
Chapter 3

Laser and Transducers

3.1 Physics
Stimulated emission
Atoms have the ability to emit and to absorb radiation energy such as visible
light. When light is absorbed, the atom is excited into a higher energy state.
If the atom is left in this excited high energy state, it will sooner or later
re-emit the light and thus return to a lower state. Even though this spontaneous
emission cannot be predicted for a single atom, it is possible to calculate the
mean lifetime of the state.
Stimulated emission occurs when the excited atom is exposed to radiation of
a specific frequency. If the radiation has the same frequency as the one the
atom is about to emit, the radiation will be emitted earlier. The emitted
radiation will have the same direction, phase and polarisation as the stimulating
radiation. This way the radiation beam can be amplified. But in equilibrium,
almost all atoms are at the lower energy level, and thus the photons will be
absorbed and the radiation damped.
In order for the amplification to occur there has to be an inverted population,
i.e. there have to be more atoms in the excited state than in the low energy state
[Jönsson, 2002].

Semiconductors
Semiconductors are materials that can be both conducting and insulating
depending on the conditions. Only electrons with an energy level above the
Fermi energy $E_f$ may be acted on and accelerated in the presence of an
electric field.
In a solid material the electrons of the atoms are acted on by the electrons
and nuclei of adjacent atoms in such a way that the available energy states are
split into electron energy bands (see figure 3.1). Between the energy bands there
may exist energy band gaps. Energy levels within these gaps are not available
for the electrons that participate in the covalent bonds. In a semiconducting
material such as silicon there exists such a gap between the filled valence band
and the empty conduction band. The Fermi energy of pure silicon lies near
the centre of this band gap. In order for an electron to be excited into the
conduction band it has to absorb the corresponding band gap energy $E_g$.



Figure 3.1: Electron energy band structure

The characteristics of most commercial semiconductors are determined by
the impurities that are introduced by an alloying process termed doping. The
doping is separated into different layers depending on the application.

n-layer
In the n-layer, impurities with one extra valence electron are added. If for
example phosphorus, with five valence electrons, is added into a matrix of silicon,
with four valence electrons, the extra electron will not participate in the
covalent bond. Instead there exists one energy state for that electron within the
energy band gap, close to the conduction band, called the donor state. Here
the Fermi energy is also closer to the conduction band. The thermal energy
available at room temperature is sufficient to excite the electron from the donor
state into the conduction band, thus creating an excess of free electrons in the
n-layer without leaving a hole in the valence band.

p-layer
The p-layer works the opposite way. Here impurities with one valence electron
less are added. For example, boron, with three valence electrons, could be
added into the silicon matrix. This kind of impurity atom introduces an energy
level just above the valence band, within the band gap, called an acceptor state.
The thermal energy present at room temperature will excite electrons from
the valence band into the acceptor state, thus creating an excess of free holes
in the valence band without leaving any new electrons in the conduction band.
By joining n-doped and p-doped layers in different ways, components such
as diodes, transistors and microchips can be made. [Callister, 2005]

3.2 Laser
The laser (Light Amplification by Stimulated Emission of Radiation) emits
photons in a coherent beam. It consists of an active medium placed in an
optical cavity, i.e. between two mirrors. An inverted energy population is
created by pumping energy into the medium. The spontaneously emitted light
is then reflected by the mirrors and amplified by the medium, thus creating
a standing wave. By making one of the mirrors partially transparent, a laser
beam is formed.

The first
The ruby laser (see figure 3.2), the first working laser type, was invented by
Theodore Maiman in 1960. The active medium consisted of the gemstone ruby,
which is chromium embedded in a matrix of aluminium oxide. Both ends of
the ruby were polished and silvered, and energy was pumped into it by a
spiral-formed flash lamp.

Figure 3.2: Diagram of the rst ruby laser

Since then other lasers such as YAG, gas and semiconductor lasers have been
developed.

The semiconductor laser


When electrons and holes recombine in the depletion zone at the junction
between the n-layer and the p-layer of a semiconductor, the extra energy from
the electrons is emitted as light. While the current is kept low this component is
known as an LED, light emitting diode. Increasing the current over a defined
limit creates an inverted population, since the electrons are injected faster than
the holes can receive them. By splitting and polishing the semiconducting
crystal, it becomes an optical cavity where the light is amplified, thus creating
a standing wave. The light emitted is coherent, but due to diffraction it is
quite divergent and therefore needs lenses to form a round, straight beam.
The semiconductor laser has been developed to supply the needs of optic
fibre communication and optical information storage such as the CD and the
DVD. This development has led to small and inexpensive lasers that are now
commonly available. [Jönsson, 2002]

3.3 Photodiode
A photodiode can be used to reconvert a light signal into an electric signal.
Similar to a laser diode, a photodiode consists of a p-n junction semiconductor.
When light hits a photodiode and the energy absorbed is greater than the band
gap energy $E_g$, electrons from the valence band in the n-layer, the p-layer
and the depletion layer in between are excited into the conduction band,
leaving holes in the valence band. Due to the electric field in the depletion
layer, electrons are accelerated towards the n-layer and holes towards the p-layer,
thus building up a positive charge in the p-layer and a negative charge in
the n-layer. With an external circuit connected, a current will flow from the
p-layer to the n-layer.
The spectral response and the frequency response of the photodiode are
controlled by the thickness of the layers and the doping concentrations.
[Hamamatsu, 2006]
Chapter 4

Concepts

4.1 Intersection
Two lines
If the sensor system is able to locate the direction from a known point to the
point that is to be identified, both can be viewed as lying on a known line.
Two lines can either be parallel, skew or intersecting in one point [Sparr, 1994].
If directional measurements can be made from two known points, the location
of the unknown point can be calculated (see figure 4.1).

Figure 4.1: Two lines intersecting in one point

Three planes
A sensor system identifying a plane where the unknown point is located can,
in a similar way as with the line detector, be used to identify the location of a
point. Two planes can either be parallel or intersect in one line. If a third
plane is not parallel to either of the other two non-parallel planes, all three planes
will intersect in one point [Sparr, 1994]. If this kind of plane measurement
can be made from three known points, the location of the unknown point can be
calculated (see figure 4.2).


Figure 4.2: Three planes intersecting in one point

Transducer mounting
Mounting a laser beam in an exact intended direction with reference to the
wrist of a robot might be a challenging task. Problems occur since
even small directional deviations of the beam at one end will lead to large
distance deviations at the other end. A deviation of 1° will make the beam
deviate about 5 mm at a point 300 mm from the origin (see equation 4.1).

\[ 300 \cdot \sin(1^\circ) = 5.24 \qquad (4.1) \]


It may seem like the calibration problem just moves from the robot cell to
the calibration of the laser transducer. But once the laser is firmly mounted it
will retain its position in relation to the robot wrist. Some extra measurements
on the same point in the robot cell will give more equation systems to solve and
thus make it possible to eliminate these constant relations. This way it might
be possible to use the laser without knowing its exact location and direction.
Once the first point is measured it will be easy to calculate and use the
sensor constants, thus speeding up the subsequent measurements.

4.2 Proposals
Single photodiode
A single photodiode defines a point of interest. The wrist-mounted laser points
at the photodiode and an acknowledge signal is generated when the beam hits
it. With beams coming from different directions, one single unambiguous point
might be difficult to define, since the light might refract differently depending
on the incident angle.
By lowering the photodiode into a cone (see figure 4.3), the small hole at the
tip of the cone could define the point, and the incident angle becomes less
important. Any light that enters the cone enters through this point and is
detected by the photodiode that constitutes the floor of the cone. Three cones
are milled in one flat piece of metal for each coordinate system, in order to
retain correct relations between the axes. Since the photodiode will determine
the total amount of light entering, interference effects might not be a
problem.

Figure 4.3: Photodiode lowered into a cone

Even though this setup can tell that the beam points in the correct direction,
it might be difficult to find the hole, because the number of points in
space is large. Searching through all points might take a lot of time even if the
point is almost known and the search space thus is limited.
If a lens that spreads the beam into a plane [ELFA, 2006] is attached to the
laser it will be easier to align the point with the plane, but as discussed in
section 4.1 more measurements must then be made.

Photodiode array
The properties of semiconducting materials make it possible to integrate
several photodiodes into arrays or matrices. The arrays are normally used in
spectrometers, linear encoders or for laser alignment in CD players. Matrices
tailored for laser beam alignment are also available [Hamamatsu, 2006]. A
similar matrix is the CCD element that captures the image in a digital camera,
which contains more photodiode elements than the previous ones.
A matrix can be used as feedback, directing the beam in two dimensions into
the centre of the matrix. With a beam plane in combination with an array,
control in one dimension would be enough, letting the plane sweep over the
array more or less perpendicularly (see figure 4.4).

Figure 4.4: Laser plane sweeping over an array

Photodiode matrices and arrays are sensitive to the rough environment in a
workshop and need protection. A glass window would be one option, but again
the problem of random refraction from different angles becomes apparent. The
sensors also have to be placed very exactly.
The single photodiode, the array and the matrix are all quite complex
solutions, with three sensors for every coordinate system. The number of
measurements is six and nine for line systems and plane systems respectively, not
counting those necessary for calibration of the laser.

Coordinate system mounted lasers


Swapping the places of the laser and the transducer, by mounting the transducer at
the robot wrist and having two lasers as the x and y directions of the coordinate
system, is another option. The lasers point outwards from a common point in
the x and y directions respectively. The z direction is calculated as the vector
product of x and y. Two measurements are made per axis at different distances
from the origin, and it is thus possible to determine both the directions and the
position of the origin.
Because of equation 4.1 the demands on the mounting of the laser diodes
are even higher than when the transducers are mounted at the target. Still, the
problem of refraction can be solved with a cone defining the measured point.
Even though the method is fairly complex, the total number of measurements
is reduced to four per coordinate system, not counting those required to
calibrate the sensor.

Bar-code reader
Pen-type bar-code readers consist of a light source and a photodiode placed
next to each other. The photodiode is selected such that it has the greatest
sensitivity at the same frequency as the light source emits. The photodiode
measures the intensity of the reflected light as the pen is swept over the bar-code
(see figure 4.5). Black areas absorb most of the light whereas white areas
reflect much of the light. The bar-code information is thus transformed into
a binary signal. Different widths of the lines make it possible to encode quite
a lot of information. A drawback with the pen reader is that it has to be swept
at constant speed in close proximity to the bar-code.

Figure 4.5: An EAN-13 barcode

Laser scanners similarly use a laser beam as the light source. Here
the user does not have to sweep the scanner, since the beam is directed back
and forth with a reciprocating mirror or a rotating prism [TALtech, 2006].
The well-directed light beam from the laser also makes it possible to read
bar-codes from a distance. A few beam paths in different directions give the
user freedom to scan the bar-code of an object without having to care about
the direction of the scanner. [Netto, 2006]
A hybrid between the pen reader and the laser scanner, with a fixed laser
beam mounted on a robot, could be used to identify some pattern printed on a
paper, leading to the identification of the unknown coordinate system. Ideally
the robot will follow a line or a curve on the paper to a point (e.g. a corner)
while the coordinates and the directions of the robot wrist are saved for
later calculations. This is repeated a sufficient number of times from
different directions to gain full information about the coordinate system. This
method will probably be able to trace the coordinate systems from an arbitrary
distance, thus speeding up the measurements.

4.3 Selection
Criteria
To select one of the proposed methods, some evaluation criteria were set up:

1. total complexity of the solution
2. complexity of wrist components
3. complexity of coordinate system components
4. estimated measurement time

Evaluation
The proposed laser and sensor combinations were graded on a scale from 1 to 5,
where 5 represented the most desirable properties for that criterion. The grades
for the different criteria were summed without weighting and compared
to the average of all the sums.

Coordinate system             Robot wrist                   1  2  3  4  Sum
single photodiode in a cone   laser beam                    3  5  4  2   14
single photodiode in a cone   laser plane                   3  3  4  3   13
photodiode array              laser plane                   3  3  3  3   12
photodiode matrix             laser beam                    2  5  2  3   12
laser beam                    single photodiode in a cone   2  3  2  4   11
laser beam                    photodiode matrix             2  2  2  4   10
paper with printed patterns   fixed laser bar-code reader   5  4  5  4   18
                                                        Average:       12.9

Conclusion
Analysing the evaluation table, one draws the conclusion that the bar-code
reader solution is the best one to proceed with.
Chapter 5

Kinematics

5.1 Frame description


Position vector
A position in space, related to a known frame of reference A, can be described
with a 3 × 1 position vector.

\[ {}^A p = \begin{bmatrix} p_x \\ p_y \\ p_z \end{bmatrix} \qquad (5.1) \]

The position vector can also be used to describe translations from one position
to another.

Rotation matrix
A robot with six degrees of freedom can be oriented in any direction. Therefore
it is necessary in most applications to describe not only positions but also
orientations. A new frame of reference B is attached to the body whose
orientation is to be described. This new frame B is described by the unit vectors
${}^A X_B$, ${}^A Y_B$ and ${}^A Z_B$ expressed in frame A. Joining these three
vectors forms the rotation matrix.

\[ {}^A_B R = \begin{bmatrix} {}^A X_B & {}^A Y_B & {}^A Z_B \end{bmatrix} = \begin{bmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{bmatrix} \qquad (5.2) \]

Sometimes, when the rotation matrix ${}^A_B R$ is known, it is interesting to
describe the rotation of frame A in relation to frame B. This is done by
inverting ${}^A_B R$. But since ${}^A_B R$ is orthogonal, this is the same as
transposing it [Sparr, 1994].

\[ {}^B_A R = {}^A_B R^{-1} = {}^A_B R^{T} \qquad (5.3) \]

Transformation matrix
If a position is known in frame B as the vector ${}^B p$, it is possible to describe it
in frame A by multiplying the vector with the rotation matrix ${}^A_B R$ and then
adding the distance from frame A to frame B, i.e. ${}^A p_{B\,origo}$.

\[ {}^A p = {}^A_B R \, {}^B p + {}^A p_{B\,origo} \qquad (5.4) \]

This can be written more conveniently by rewriting the expression as

\[ \begin{bmatrix} {}^A p \\ 1 \end{bmatrix} = \begin{bmatrix} {}^A_B R & {}^A p_{B\,origo} \\ 0\;\,0\;\,0 & 1 \end{bmatrix} \begin{bmatrix} {}^B p \\ 1 \end{bmatrix} \qquad (5.5) \]

and then introducing the homogeneous transformation matrix ${}^A_B T$.

\[ {}^A p = {}^A_B T \, {}^B p \qquad (5.6) \]

Combined transformations
If the transformation matrix ${}^B_C T$ is known, a position ${}^C p$ in frame C can be
written in reference to frame B.

\[ {}^B p = {}^B_C T \, {}^C p \qquad (5.7) \]

By combining equation (5.6) and equation (5.7), ${}^C p$ can be described directly
in reference to frame A.

\[ {}^A p = {}^A_B T \, {}^B_C T \, {}^C p \qquad (5.8) \]

This means that the compound matrix ${}^A_C T$ can be written as the kinematic
chain in equation (5.9).

\[ {}^A_C T = {}^A_B T \, {}^B_C T \qquad (5.9) \]

Inverted transformation
By using the rule defined in equation (5.3) it is easy to invert the transformation
matrix.

\[ {}^B_A T = {}^A_B T^{-1} = \begin{bmatrix} {}^A_B R^{T} & -{}^A_B R^{T} \, {}^A p_{B\,origo} \\ 0\;\,0\;\,0 & 1 \end{bmatrix} \qquad (5.10) \]

[Olsson et al., 2005]
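
As a concrete illustration, the relations above translate directly into a few lines
of Matlab. The following is a minimal sketch of equations (5.6), (5.9) and (5.10);
the frames and the numeric values are invented for the example.

% Transformation from frame A to frame B: a 90 degree rotation about z
% and a 100 mm translation along x (illustrative values).
R_AB = [0 -1 0; 1 0 0; 0 0 1];
p_AB = [100; 0; 0];
T_AB = [R_AB p_AB; 0 0 0 1];

% A point known in frame B, expressed in frame A (equation 5.6),
% using homogeneous coordinates.
p_B = [10; 20; 30; 1];
p_A = T_AB * p_B;

% Combined transformation (equation 5.9): chaining frame B to frame C.
T_BC = [eye(3) [0; 50; 0]; 0 0 0 1];
T_AC = T_AB * T_BC;

% Inverted transformation (equation 5.10): the transpose of the
% rotation part replaces a full matrix inversion.
R = T_AB(1:3,1:3); p = T_AB(1:3,4);
T_BA = [R' -R'*p; 0 0 0 1];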
Chapter 6

Simulation

6.1 Simulink
Simulink is a toolbox within Matlab that is capable of modelling, simulating
and analysing dynamic systems. The systems can be both linear and nonlinear.
It is possible to work both by connecting ready-made building blocks and by
writing custom function blocks in different programming languages such as
Matlab (.m files), C or C++ [MathWorks Inc., 2006].

6.2 Sensor model

Figure 6.1: Model of how the laser beam interacts with the plane

In the model world all transformation matrices are known. The plane input
(input 1 in figure 6.1) is the transformation matrix from the world origin to the
plane where the unknown coordinate system is located. The x and y directions
of the transformation matrix, together with the translational part, span the
plane (see equation 6.1).

\[ T_4 = \begin{bmatrix} r_{11,4} & r_{12,4} & \ldots & p_{x,4} \\ r_{21,4} & r_{22,4} & \ldots & p_{y,4} \\ r_{31,4} & r_{32,4} & \ldots & p_{z,4} \\ 0 & 0 & 0 & 1 \end{bmatrix} \qquad (6.1) \]

The other inputs (in figure 6.1) describe the kinematic chain from the world
origin to the laser sensor. As the laser beam coincides with the x direction of the
sensor coordinate system, the $[r_{11}; r_{21}; r_{31}]$ vector of the rotation matrix of
the chain, together with the translational part, defines a line (see equation 6.2).

\[ T_1 \cdot T_2 \cdot T_3 = \begin{bmatrix} r_{11,3} & \ldots & \ldots & p_{x,3} \\ r_{21,3} & \ldots & \ldots & p_{y,3} \\ r_{31,3} & \ldots & \ldots & p_{z,3} \\ 0 & 0 & 0 & 1 \end{bmatrix} \qquad (6.2) \]

When the laser beam points at the plane, the line and the plane intersect in one
point (see equation 6.3). The first block in figure 6.1 contains a Matlab function
(see section A.1) that solves equation 6.3 for u, w and t. The unknowns u and
w are the x and y positions, respectively, where the line intersects the plane
with respect to the plane origin.

\[ \begin{bmatrix} p_{x,4} \\ p_{y,4} \\ p_{z,4} \end{bmatrix} + u \cdot \begin{bmatrix} r_{11,4} \\ r_{21,4} \\ r_{31,4} \end{bmatrix} + w \cdot \begin{bmatrix} r_{12,4} \\ r_{22,4} \\ r_{32,4} \end{bmatrix} = \begin{bmatrix} p_{x,3} \\ p_{y,3} \\ p_{z,3} \end{bmatrix} + t \cdot \begin{bmatrix} r_{11,3} \\ r_{21,3} \\ r_{31,3} \end{bmatrix} \qquad (6.3) \]

The next block uses this x, y position to determine whether the beam hits the
plane outside or inside the black square defined by (0, 0) and the box constant.
The output of the function is a boolean that is false if the beam is absorbed by
a black area and true if the beam is reflected by a white area.
In reality, x and y are unknown to the robot, but they are made outputs
from the model for analysis purposes only.

6.3 Robot model


Since the application should be independent of the robot type, the internal
controller of the robot is assumed to take care of the axis control. The robot is
told to rotate and translate its tool center point (TCP) relative to the previous
pose. The robot model is thus extremely simplied. The rotate input (see
gure 6.2), is a vector dening how much to rotate TCP around x, y, z axis in
radians. The translate input denes in a similar way how much to translate
the TCP along the x, y, z axis in millimetres. The robot delay (see gure 6.2)
keeps track of the previous pose as a transformation matrix and the move robot
function (see section A.2) transforms that matrix into the new pose according
to the rotate and translate commands. It is also in the robot delay that the
initial robot pose is set.


Figure 6.2: Model of how the robot reacts to rotate and translate commands

By translating the transformation matrix of the sensor in figure 6.1 a few
millimetres in the z direction of the wrist, it is possible to simulate what happens
if the robot is controlled in relation to the well-defined wrist instead of around
an approximated point in the beam.

6.4 Controller model


When feeding the sensor output signal to the robot inputs and thus creating
a feedback loop there has to be a controller that converts the sensor signal to
robot instructions. Since the task locating a corner of a square with a binary
sensor is a nonlinear control problem, putting a normal PID controller in the
loop would not do the job.

Finite state machine


A nite state machine is a way of representing a reactive system. The system
make transitions between dierent discrete states depending on internal and
external conditions [MathWorks Inc., 2006].
The only source of external conditions is in this case the output of the sensor.
The internal conditions are represented by two counters. The counter count
keeps track of the number of steps since the last time the sensor detected the
black area. The counter pos counts the number of data sets that have been
completed. For each data set the TCP stays in one point and the corner
of the square is located with the TCP rotating around its y and z axis. The
internal variables state, count and pos are saved for next time step in the delay
(see gure 6.3). The state machine is implemented according to gure 6.5 as
described below.

Figure 6.3: Model of the controller program

State 0
In state 0 it is assumed that the measurements start with the beam pointing at
a spot somewhere below the corner of the black square. In reality this will
be guaranteed by starting the search pattern below the assumed coordinate
system.
From here the TCP rotates stepwise in the negative direction around the y axis
and in the positive direction around the z axis until the beam reaches the black
area, where the machine switches to state 1 (see figure 6.4).

State 1
In state 1, in the black area, the TCP rotates stepwise in the positive y and
negative z directions until the beam reaches the white area. In the white area it
rotates in both the positive y and z directions until it reaches the black area again.
This way it continues until the sensor has been pointing at the white area for
count_max number of steps, when the machine switches to state 2.

State 2
State 2 is similar to state 0 in that the laser beam starts far from the black
area. The beam moves by rotating around the y axis in the negative direction
until it hits the black area again, and then the machine switches to state 3.

Figure 6.4: Beam path over the plane

State 3
State 3 is similar to state 1, but instead of following the horizontal line the
beam follows the vertical line. In the black area, there is positive rotation
around y and negative around z. In the white area the rotation around y
is negative instead. As in state 1 the beam runs away until the count triggers
the switch to state 4.

State 4
State 4 is similar to both state 0 and state 2. Instead of rotating back to the
black area, the TCP translates stepwise in positive y and in positive z, with
the z step at half the step size compared to the y direction.
When the beam reaches the black area again the pos counter is increased
and the state machine restarts from state 1. The second time, the program
ends after state 3 due to the pos counter. A condensed sketch of this control
logic is given below.
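
The following Matlab function is a condensed sketch of the state chart, written
to mirror the description above. The step sizes and the stop marker are invented
for illustration; the actual controller is listed in appendix A.3.

function [rot, trans, new] = control_logic(in, old, count_max, pos_max)
% Sketch of the state chart in figure 6.5. old = [state; count; pos],
% in = sensor output (true on white, false on black).
step = 0.002;   % rotation step in radians (assumed value)
tstep = 1;      % translation step in mm (assumed value)
rot = [0; 0; 0]; trans = [0; 0; 0];
state = old(1); count = old(2); pos = old(3);
switch state
    case 0  % approach the square diagonally from below
        rot = [0; -step; step];
        if ~in, state = 1; end
    case 1  % follow the horizontal edge by bouncing over it
        if ~in
            rot = [0; step; -step]; count = 0;
        else
            rot = [0; step; step]; count = count + 1;
            if count > count_max, state = 2; count = 0; end
        end
    case 2  % rotate back down until the square is found again
        rot = [0; -step; 0];
        if ~in, state = 3; end
    case 3  % follow the vertical edge
        if ~in
            rot = [0; step; -step]; count = 0;
        else
            rot = [0; -step; -step]; count = count + 1;
            if count > count_max
                if pos >= pos_max
                    state = -1;   % done: stop the search
                else
                    state = 4; count = 0;
                end
            end
        end
    case 4  % translate to the next measuring position
        trans = [0; tstep; tstep/2];
        if ~in, pos = pos + 1; state = 1; end
end
new = [state; count; pos];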

6.5 Saver model


The saver block is triggered by the sensor signal on both the positive and the
negative flank. When the block is triggered it saves the current robot pose and
the inner states of the controller. The pose information is later used to calculate
the unknown coordinate system, whereas the controller states are saved to
make it easier to determine which pose data belonged to which line.

6.6 Simulation
Connecting the different building blocks into a feedback loop (see figure 6.6),
the controller completes the search pattern, following both a horizontal line and
a vertical line twice, in about 1300 time steps. In this model one time step
symbolises the time it takes for the robot to move to the next set point.
Figure 6.7 indicates how the beam sweeps over the plane in the x and y directions
respectively versus time. Figure 6.7 also indicates how the sensor reacts
over time.
Rotating the black area is possible for angles up to at least ±π/6. The
corresponding search patterns are displayed in figure 6.8 and figure 6.9.

Figure 6.5: State chart of the controller program



Figure 6.6: The main simulink model, connecting the dierent submodels

Figure 6.7: Simulation results of how x, y and the sensor output vary with time

Figure 6.8: Search pattern when the black area is rotated π/6



Figure 6.9: Search pattern when the black area is rotated −π/6


Chapter 7

Implementation

The realisation of the bar-code reader concept from section 4.2 needed four
things:
• a laser with integrated sensor

• a suitable pattern printed on a paper

• a robot loaded with a control program

• a routine that handles the measured data

7.1 Laser and sensor


As described in section 4.2, laser light was to be emitted from the end of the
robot arm and the reflected light collected and measured from the same place.
A thin laser beam and a sensitive photodiode were desirable properties of the
device.

Make or buy?
While facing questions on how to optimise the optical, mechanical and
electrical properties of a self-made device, research on the Internet revealed three
similar devices from the same manufacturer.

Keyence LV Series
The LV Series from the Keyence corporation are sensors that can be used to
detect objects in a number of different situations. From this series there were
three different sensors that were suitable for this application [Provicon, 2006]:
• LV-H32 - adjustable beam spot (min Ø0.3 mm)

• LV-H35 - constant beam spot (Ø2 mm), coaxial laser and sensor

• LV-H37 - small spot (Ø50 µm), short range


Of these three, the LV-H32 was chosen since it seemed to be the most flexible
and thus best suited for the experimental setup.
In a production setup the LV-H37 might be a better selection, since the
smaller beam spot would give better repeatability. But since the precise locations
of the objects in the robot cell are unknown beforehand, the sensor with
the shorter range would increase the risk of collision between the robot/sensor
and the other objects.

Amplifier
An amplifier (LV-21AP) came with the sensor. The amplifier filters the
analogue signal from the sensor and outputs a digital signal on the black cable
depending on a trigger level. There are a number of different modes and
settings that can be adjusted. The most convenient feature is the automatic
tuning. By pressing the set button while passing the optical axis back and
forth over the edge, the trigger level will be adjusted to the midpoint between
the maximum and minimum light intensity detected (see figure 7.1). This
feature can also be controlled by grounding the pink cable, hence it is possible to
automate the sensor tuning. This possibility was however not implemented in
the experimental setup, since it is easy to tune the sensor manually by pressing
the button. It is also possible to interrupt the laser radiation by short-circuiting
the purple cable with the brown power cable, but this was not implemented
in the current setup either. [Keyence, 2006]

Figure 7.1: Received light intensity

The price of the sensor and amplifier was about 5500 SEK, which is about
15% of the price of the sensor described in section 2.4 [Provicon, 2006].

Mounting
Included with the sensor was also a general purpose mounting bracket (see
figure 7.2).
To be able to fit the mounting bracket to the robot arm, an adapter plate
was made in aluminium (see drawing in appendix B.1).

Figure 7.2: Included mounting bracket

7.2 Paper pattern


A black square (10 × 10 cm) was printed on a normal printer (see figure 7.3)
and the paper was attached to a flat surface. One of the corners of the square
defined the origin of a coordinate system, and the two sides closest to that
corner defined the x and y directions.
The square was made this big to simplify the initial measurements. In a
refined production setup, the square might be made smaller. When measuring
a robot cell, the squares are placed at the different coordinate frames of interest.

7.3 Robot
The sensor was mounted on a standard ABB IRB2400 robot. Control output A
from the sensor (the black cable) was connected to the digital input on the robot
controller called digIn1.

Rapid
Every robot manufacturer has developed at least one programming language
of its own, hence there exist several hundred different languages and dialects
[Freund et al., 2001].
The language developed for the ABB robots is called Rapid. The robot
program for the experiment was made by manually porting the control program
from the Matlab simulation (see section 6.4) to Rapid, without any major logical
changes.
In the robot program the search pattern starts relative to the current position
of the robot, to simplify the handling of the experiment. In a production
setup this start position would instead be programmed in some 3D simulation
environment (see section 2.2), as it is the approximate location of the unknown
coordinate system.
The final Rapid program (see appendix B.2) was loaded into the robot
controller with a standard FTP (file transfer protocol) client.

Figure 7.3: A black square

7.4 Data processing


While passing over the edges of the square, the controller saves the current
state, the identification number of the data set, the state of the sensor (black
or white) and the transformation from the robot base to the end of the wrist
to a text file.
The transformation is represented by a translational vector x, y, z and the
four quaternion elements q1, q2, q3, q4. Using quaternions is a more compact
way of describing the rotation matrix [ABB Robotic Products, 1995].
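
For reference, such a quaternion expands into a transformation matrix as in the
sketch below. The function name is invented for the example, and ABB's
convention with q1 as the scalar part is assumed; the actual processing uses the
q2tr function from the Robotics Toolbox (see the steps below).

function T = quat2tform_sketch(x, y, z, q)
% Build a homogeneous transformation from a translation (x, y, z)
% and a unit quaternion q = [q1 q2 q3 q4] with q1 as the scalar part.
w = q(1); a = q(2); b = q(3); c = q(4);
R = [1-2*(b^2+c^2)  2*(a*b-w*c)    2*(a*c+w*b);
     2*(a*b+w*c)    1-2*(a^2+c^2)  2*(b*c-w*a);
     2*(a*c-w*b)    2*(b*c+w*a)    1-2*(a^2+b^2)];
T = [R [x; y; z]; 0 0 0 1];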

Extracting vectors
A Matlab function, extract_vector.m, was written that filters and transforms
the data from the test into vectors corresponding to the known beam paths
(see appendix B.3). The filter program works in three steps:

1. The rows from the log file are extracted depending on the state, pos and
digIn1. The quaternions are transformed into rotation matrices with the
q2tr function from the Robotics Toolbox for MATLAB [Corke, 1996],
and transformation matrices are formed.

2. The extracted transformation matrices are multiplied with the approximate
transformation from the wrist to the laser beam, to form new
transformation matrices from the robot base to the different positions of the
laser beam. This transformation from the wrist to the beam is the same
transformation that was used to rotate around in the experiment. It
was not saved in the log file, since it is not the true value, due to the
possible mounting errors described in section 4.1.

3. The directions and the origins of the different beam locations are
extracted from the new transformation matrices in the same way as in the
simulation (see equation 6.2). Those directional and positional vectors
are, together with the state information, returned with one laser beam
location on each row.

The intersections between the beams


When the positional and directional vectors from the measurements are known,
they can be used to calculate the unknown coordinate systems. Ideally those
lines form four planes. The origin of the unknown coordinate system is then
calculated as the intersection of all four planes, and the directions are
calculated as the intersections between pairs of planes.

Mounting errors
The problem is however not that easy, due to the laser beam mounting errors
(see section 4.1). Instead, the measurement data consist of lines that start at
given positions P, with a constant unknown mounting error dP, and point
in given directions R, with a constant unknown error dR, at the unknown
but straight edge lines $X_0 + m(i)R_x$ and $X_0 + m(j)R_y$ (see figure 7.4). This
leads to equation 7.1.

\[ \begin{aligned} (P(i) + dP) + t(i)\,(R(i) + dR) &= X_0 + m(i)\,R_x \\ (P(j) + dP) + t(j)\,(R(j) + dR) &= X_0 + m(j)\,R_y \end{aligned} \qquad (7.1) \]

The two unknown lines, $X_0 + m(i)R_x$ and $X_0 + m(j)R_y$, are also known
to be perpendicular, which means that $R_x \perp R_y$. The dot product of two
perpendicular vectors is zero [Sparr, 1994]. Hence equation 7.2 poses a further
constraint on the solution.

\[ R_x \cdot R_y = 0 \qquad (7.2) \]

Solving these equations constitutes a nonlinear problem, since m(i), $R_x$, m(j)
and $R_y$ are all unknown. One approach tried was using the nonlinear
data-fitting method lsqnonlin in Matlab, based on the Levenberg-Marquardt
algorithm [MathWorks Inc., 2006]. But this was not successful.

Figure 7.4: The unknown x direction

Generalised solution
Instead of finding the exact solution, a more generalised approach was used
that does not compensate for the mounting errors. Each plane was divided into
two sets of lines (see figure 7.5). The plane was assumed to start in the point
$X_0 = (x_0, y_0, z_0)$, taken as the mean value of P, and the two non-parallel
vectors spanning the plane, $R_1 = (\alpha_1, \beta_1, \gamma_1)$ and $R_2 = (\alpha_2, \beta_2, \gamma_2)$, were taken
as the mean values of the directional vectors of the two sets respectively.

Figure 7.5: The mean lines

The plane through $(x_0, y_0, z_0)$ that is parallel to $(\alpha_1, \beta_1, \gamma_1)$ and $(\alpha_2, \beta_2, \gamma_2)$
is given by equation 7.3.

\[ \det \begin{bmatrix} x - x_0 & y - y_0 & z - z_0 \\ \alpha_1 & \beta_1 & \gamma_1 \\ \alpha_2 & \beta_2 & \gamma_2 \end{bmatrix} = 0 \qquad (7.3) \]

Computing the determinant gives the general equation of the plane (equation 7.4),

\[ ax + by + cz + d = 0 \qquad (7.4) \]

where

\[ \begin{aligned} a &= \beta_1 \gamma_2 - \beta_2 \gamma_1 \\ b &= -\alpha_1 \gamma_2 + \alpha_2 \gamma_1 \\ c &= \alpha_1 \beta_2 - \alpha_2 \beta_1 \\ d &= -x_0 a - y_0 b - z_0 c \end{aligned} \qquad (7.5) \]

Computing equation 7.4 for each of the four measured planes gives an
overdetermined equation system (equation 7.6),

\[ A X_0 = -D \qquad (7.6) \]

where

\[ A = \begin{bmatrix} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3 \\ a_4 & b_4 & c_4 \end{bmatrix} \qquad (7.7) \]

\[ X_0 = \begin{bmatrix} x_0 \\ y_0 \\ z_0 \end{bmatrix} \qquad (7.8) \]

\[ D = \begin{bmatrix} d_1 \\ d_2 \\ d_3 \\ d_4 \end{bmatrix} \qquad (7.9) \]

The location of the intersection of the four unknown planes is solved
with a least squares fit. In Matlab that is done by typing

X0 = A\(-D)                                                      (7.10)
The unit normal vector $n = (n_x, n_y, n_z)$ of each plane is given by

\[ n_x = \frac{a}{\sqrt{a^2+b^2+c^2}}, \quad n_y = \frac{b}{\sqrt{a^2+b^2+c^2}}, \quad n_z = \frac{c}{\sqrt{a^2+b^2+c^2}} \qquad (7.11) \]

and specifying the constant

\[ p = \frac{d}{\sqrt{a^2+b^2+c^2}} \qquad (7.12) \]

gives the Hessian normal form of the plane.

\[ n \cdot x = -p \qquad (7.13) \]
To find the x direction of the unknown coordinate system, i.e. the intersection
of planes 1 and 2, $m_x$ and $b_x$ are defined (equations 7.14 and 7.15),

\[ m_x = \begin{bmatrix} n_1^T \\ n_2^T \end{bmatrix} \qquad (7.14) \]

\[ b_x = -\begin{bmatrix} p_1 \\ p_2 \end{bmatrix} \qquad (7.15) \]

so that

\[ m_x X_0 = b_x \qquad (7.16) \]

gives the direction $R_x$ of the intersection line $X_0 + tR_x$ as the negative
nullspace of $m_x$.

\[ R_x = -\mathrm{null}(m_x) \qquad (7.17) \]

The y direction of the unknown coordinate system is then given in a similar
way as

\[ R_y = -\mathrm{null}(m_y) \qquad (7.18) \]

where

\[ m_y = \begin{bmatrix} n_3^T \\ n_4^T \end{bmatrix} \qquad (7.19) \]

[Weisstein, 2002]
In order for this method to be useful for finding $X_0$, $R_x$ and $R_y$, it must be
assumed that the sensor mounting was calibrated beforehand, even though
no method for that was successfully developed during this project.

The transformation matrix
The transformation matrix from the robot base to the measured coordinate
system can be calculated when the origin $X_0$ and the two perpendicular
directions $R_x$ and $R_y$ are known.
Using the same naming convention as in the simulation, the transformation
from the robot base to the measured coordinate system is given by (compare
with equation 6.1)

\[ T_4 = \begin{bmatrix} R_x & R_y & Z & X_0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \qquad (7.20) \]

where

\[ Z = \frac{1}{|R_x \times R_y|} \, (R_x \times R_y) \qquad (7.21) \]

Matlab implementation
The generalised solution was implemented in Matlab (see appendix B.3). In
the Matlab program, the coordinate system is first calculated for the data
collected when the optical axis made the transition from white to black, and then
for the data from the black to white transition. Both calculations are plotted
in the same 3D graph to simplify comparisons.
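
Condensed into a single function, the chain from equation 7.3 to equation 7.20
might look like the sketch below. The struct-array interface and the function
name are invented for the example; the actual implementation is listed in
appendix B.3. Note that null returns a basis vector of arbitrary sign, so in
practice the resulting directions may need to be flipped.

function T4 = fit_frame(planes)
% planes: 1-by-4 struct array with fields X0 (3x1 mean point) and
% R1, R2 (3x1 mean directions), ordered so that planes 1 and 2
% intersect in the x axis and planes 3 and 4 in the y axis.
A = zeros(4,3); D = zeros(4,1); n = zeros(3,4); p = zeros(4,1);
for k = 1:4
    abc = cross(planes(k).R1, planes(k).R2);   % normal, equation 7.5
    d = -planes(k).X0' * abc;
    A(k,:) = abc'; D(k) = d;
    n(:,k) = abc / norm(abc);                  % equation 7.11
    p(k) = d / norm(abc);                      % equation 7.12
end
X0 = A \ (-D);                                 % least squares, eq. 7.10
Rx = -null([n(:,1)'; n(:,2)']);                % equation 7.17
Ry = -null([n(:,3)'; n(:,4)']);                % equation 7.18
Z = cross(Rx, Ry) / norm(cross(Rx, Ry));       % equation 7.21
T4 = [Rx Ry Z X0; 0 0 0 1];                    % equation 7.20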
Chapter 8

Results

8.1 Search pattern


When running the experiment on the real robot, the beam followed a path
very similar to the path predicted by the simulation. The search algorithm
was fairly stable. If the robot did not find the black square, the internal step
counter stopped the execution after a while. The search routine was optimised
for a distance of about 200 mm between the laser and the printed target, and
the trigger level of the sensor had to be set accordingly, since the level of
reflected light varies with the distance.

8.2 Data analysis


Dealing with the measurement data, the computations were made twice: first
for the data corresponding to the laser moving from the white area to the black
area, and secondly for the data corresponding to moving from the black area to
the white area. The 3D plots can be seen in figure 8.1 and in appendix C.1.

Figure 8.1: The four planes intersecting


Black – white distance

Measuring the distance between $X_{0,black}$ and $X_{0,white}$ gave

\[ X_{0,black} - X_{0,white} = \begin{bmatrix} 0.0027 \\ 0.2125 \\ 0.6344 \end{bmatrix} \; \mathrm{[mm]} \qquad (8.1) \]

and the absolute difference

\[ |X_{0,black} - X_{0,white}| = \mathrm{norm}(X_{0,black} - X_{0,white}) = 0.6691 \; \mathrm{mm} \qquad (8.2) \]

Perpendicular directions?
Computing the dot product of the directional vectors for the black data gave

\[ R_x \cdot R_y = 0.0117 \neq 0 \qquad (8.3) \]

and

\[ R_x \cdot R_y = 0.0026 \neq 0 \qquad (8.4) \]

for the white data. Since neither dot product satisfied

\[ R_x \cdot R_y = 0 \qquad (8.5) \]

the vectors were not exactly perpendicular, as they were supposed to be.
It was seen from the 3D plots that the white and the black directions
diverged, especially in the $R_y$ direction.
Chapter 9

Conclusions

9.1 Simulation
Simulation is a powerful tool for a number of different problems, especially in
the field of robotics control. Spending some time building models might save
a lot of time in the implementation phase, due to the possibility of "playing
around" with the models without the risk of destroying things.

9.2 Experiment
By this successful implementation, it has been shown that an accurate robot
in combination with a laser sensor system of relatively low complexity can be
used as a quite advanced measuring device.

9.3 Data analysis


Black – white
Looking at the results in section 8.2, one concludes that there might be a
difference in the measured result between using the data collected when going
from black to white and the data collected when going from white to black.
This difference is probably due to the following factors:

• The delay introduced in the saving interrupt routine. Even though the
robot is told to save the current axis positions, there might be a delay
in activating the interrupt routine, causing the robot to save the wrong
axis values.

• The geometry of the beam spot. When not properly focused, the beam
from this semiconductor laser was not perfectly round.

• The not perfectly mounted laser beam, without compensation, might
have affected the difference, since the optical axis does not pass over the
edge in exactly the same spot twice.


Not exactly perpendicular

Equations 8.3 and 8.4 showed that the directions of the coordinate systems were
not perfectly perpendicular as expected. This deviation might have been
caused by:

• The not perfectly mounted, uncalibrated laser beam.

• A not perfectly flat surface where the paper was put.

Looking at the plots, it seems that the positional measurements are more
reliable than the information about the directions. This is consistent
with section 4.1, where the small error angle makes the error grow far from
the centre. One option is thus to use the measured data to define one point
($X_0$) and then calculate the directions as the directions to other measured
points ($X_1$ and $X_2$), all defined on a flat surface.

9.4 Future development


What can be done to improve this method?

• Developing a simple but automatic sensor mounting calibration routine
would make the method more reliable and thus more useful. Either a
separate method for calibration or, better, a calibration performed while
measuring is of interest.

• A smaller, better rounded beam spot, measuring from a closer location,
would increase the repeatability.

• Optimising the search speed versus the accuracy. When speeding up the
measurements, productivity rises, but there is a risk that the accuracy
drops, since the delay introduced in the interrupt routine makes the
difference between the black-to-white and the white-to-black measurements
grow.
Bibliography

ABB robotics webpage, www.abb.com/robotics, 2006


ABB Robotics Products, Product Manual IRB 2400, 1995
Markus Bernardi, Helmut Bley, Christina Franke, Uwe Seel, Institute for
Production Engineering, Saarland University, Process-based assembly
planning using a simulation system with cell calibration, IEEE, 2001
William D. Callister, Jr., Fundamentals of Materials Science and
Engineering, Wiley, 2005
P.I. Corke, A Robotics Toolbox for MATLAB, IEEE Robotics and
Automation Magazine p 24-32 volume 3, 1996
ELFA AB webpage, www.elfa.se, 2006
Eckhard Freund, Bernd Lüdemann-Ravit, Oliver Stern, Thorsten Koch,
Institute of Robotics Research (IRF), University of Dortmund, Creating the
Architecture of a Translator Framework for Robot Programming Languages,
IEEE, 2001
Hamamatsu Photonics K.K., Photodiode Technical Information,
www.hamamatsu.com, 2006
Göran Jönsson, Atomfysikens grunder, Teach Support, 2002
Keyence Corporation, General Purpose Digital Laser Sensor, LV Series,
Instruction Manual, 2006
Leica Geosystems webpage, www.leica-geosystems.com, 2007
MathWorks Inc., Matlab help files, 2006
Motoman Robotics Europe AB webpage, www.motoman.se, 2006
Netto Classensgade Copenhagen, experiments at a supermarket, 2006
Magnus Olsson, Mikael Fridenfalk, Per Cederberg, Department of Mechanical
Engineering, Lund University, Introduction to Robot Kinematics and
Dynamics, 2005
Provicon, mail contact with Fredrik Hallin, Sales Manager, Provicon, Beving
Compotech AB, 2006


J.F. Quinet, Krypton France, Calibration for offline programming purpose and
its expectations, Industrial Robot, Vol. 22 No. 3, 1995, pp. 9-14

Markus Seyfarth, SMErobot project no. 011838, Report on state of the art
calibration methods, 2006

SMErobot, Project Overview, www.smerobot.org, 2006


Gunnar Sparr, Linjär algebra, Studentlitteratur, 1994
TAL Technologies Inc. webpage, www.taltech.com, 2006
Delmia webpage, www.delmia.com/gallery/pdf/DELMIA_UltraArc.pdf, 2006
Visual components Oy webpage, www.visualcomponents.com, 2006
Weisstein, Eric W., MathWorld  A Wolfram Web Resource,
mathworld.wolfram.com, 2002
List of Figures

2.1 Position error, Seyfarth [2006] . . . . . . . . . . . . . . . . . . . 3


2.2 Laser tracker, printed with permission byLeica Geosystems [2007] 4
2.3 Laser triangulation sensor, Wikimedia Commons . . . . . . . . 5
3.1 Electron energy band structure, Wikimedia Commons . . . . . 7
3.2 Diagram of the first ruby laser, Wikipedia . . . . . . . . . . . . 8
4.1 Two lines intersecting in one point, Petter Johansson . . . . . . 10
4.2 Three planes intersecting in one point, Petter Johansson . . . . 11
4.3 Photodiode lowered into a cone, Petter Johansson . . . . . . . . 12
4.4 Laser plane sweeping over an array, Petter Johansson . . . . . . 13
4.5 An EAN-13 barcode, Wikipedia . . . . . . . . . . . . . . . . . 14
6.1 Model of how the laser beam interacts with the plane, Petter Johansson . . . 18
6.2 Model of how the robot reacts to rotate and translate commands, Petter Johansson . . . 20
6.3 Model of the controller program, Petter Johansson . . . . . . . 21
6.4 Beam path over the plane, Petter Johansson . . . . . . . . . . . 22
6.5 State chart of the controller program, Petter Johansson . . . . 24
6.6 The main Simulink model, connecting the different submodels, Petter Johansson . . . 25
6.7 Simulation results of how x, y and sensor output vary with time, Petter Johansson . . . 26
6.8 Search pattern when the black area is rotated π/6, Petter Johansson . . . 26
6.9 Search pattern when the black area is rotated −π/6, Petter Johansson . . . 27
7.1 Received light intensity, Keyence LV Series Instruction Manual 29
7.2 Included mounting bracket, Petter Johansson . . . . . . . . . . 30
7.3 A black square, Petter Johansson . . . . . . . . . . . . . . . . . 31
7.4 The unknown x direction, Petter Johansson . . . . . . . . . . . 33
7.5 The mean lines, Petter Johansson . . . . . . . . . . . . . . . . . 33
8.1 The four planes intersecting, Petter Johansson . . . . . . . . . . 36


B.1 The adapter between the mounting bracket and the robot wrist,
Petter Johansson . . . . . . . . . . . . . . . . . . . . . . . . . . 53
C.1 The four planes intersecting, Petter Johansson . . . . . . . . . . 71
C.2 The four planes intersecting, Petter Johansson . . . . . . . . . . 72
C.3 View from the x-y plane, Petter Johansson . . . . . . . . . . . . 73
C.4 View from the x-z plane, Petter Johansson . . . . . . . . . . . . 74
C.5 View from the y-z plane, Petter Johansson . . . . . . . . . . . . 75
Appendix A

Simulation models

A.1 Sensor model


Intersection in plane
function [x,y] = fcn(T1,T2,T3,T4)
% This block calculates the intersection of
% the plane and the beam

% calculate transformation matrices
A = T1*T2*T3;
B = T4;

% extract the plane
a = B(1:3,4);
b = B(1:3,1);
c = B(1:3,2);

% extract the line
d = A(1:3,4);
e = A(1:3,1);

% calculate intersection
% a + u*b + w*c = d + t*e
k = [b,c,-e]\(d-a); % solve the 3x3 linear system (backslash, not inv)

% return position in plane
x = k(1);
y = k(2);

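A standalone call with assumed transforms illustrates the convention
(a sketch, assuming the block is saved as a function file and using the
roty helper listed in section A.2): a beam fired straight down from
(10, 20, 100) onto the world x-y plane should intersect at x = 10,
y = 20.

% Hypothetical test: wrist rotated so that the beam (the local x axis)
% points down; T4 = eye(4) makes the target plane the world x-y plane.
T1 = roty(pi/2);            % 4x4 rotation, beam axis along -z
T1(1:3,4) = [10; 20; 100];  % beam origin 100 mm above the plane
[x,y] = fcn(T1, eye(4), eye(4), eye(4))  % expected: x = 10, y = 20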

White?
function out = fcn(box,x,y)

% This block determines if
% the position x, y is white or black

% input
sx = box(1); % size in x direction (mm)
sy = box(2); % size in y direction (mm)

% evaluation & output
if(x >= 0 && x <= sx && y >= 0 && y <= sy)
out = false;
else
out = true;
end
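The sign convention is easy to misread: the block returns false (black)
inside the square and true (white) outside. A quick check with assumed
values, with the block saved as a function file:

% Hypothetical check of the White? convention for a 100 x 100 mm square
box = [100; 100];
fcn(box, 50, 50)    % inside the square  -> false (black)
fcn(box, 150, 50)   % outside the square -> true  (white)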

A.2 Robot model


Move robot
function wrist_new = fcn(rot, trans, wrist_old)
% This block rotates and translates
% the wrist_old transformation matrix

% rotate
wrist_old = wrist_old * rotx(rot(1)) * roty(rot(2)) * rotz(rot(3));

% translate
wrist_old = wrist_old * [eye(3),trans;0,0,0,1];

% output
wrist_new = wrist_old;

% rotation about X axis
% Copyright (C) 1993-2002, by Peter I. Corke
function r = rotx(t)
ct = cos(t);
st = sin(t);
r = [1 0 0 0
0 ct -st 0
0 st ct 0
0 0 0 1];

% rotation about Y axis
% Copyright (C) 1993-2002, by Peter I. Corke
function r = roty(t)
ct = cos(t);
st = sin(t);
r = [ct 0 st 0
0 1 0 0
-st 0 ct 0
0 0 0 1];

% rotation about Z axis
% Copyright (C) 1993-2002, by Peter I. Corke
function r = rotz(t)
ct = cos(t);
st = sin(t);
r = [ct -st 0 0
st ct 0 0
0 0 1 0
0 0 0 1];

A.3 Controller model


Control logic
function [translate,rotate,new] = fcn(In,count_max,pos_max,old)
% This block evaluates the sensor signal and controls the robot

% main internal state, controls what to do next
state_old = old(1);

% counts the number of rotation steps since black
count_old = old(2);

% counts the number of translations
pos_old = old(3);

switch state_old

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% rotate to horizontal line (state = 0) %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
case 0
% when white
if(In)
translate = [0;0;0];
rotate = [0;-pi/(2*1800);pi/(2*1800)];
% when black
else
translate = [0;0;0];
rotate = [0;0;0];
end

% change state
if(In)
state_new = 0;
else
state_new = 1;
end
count_new = 0;

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% follow horizontal line (state = 1) %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
case 1
% when white
if(In)
translate = [0;0;0];
rotate = [0;pi/(2*1800);pi/(2*1800)];
count_new = count_old + 1;
% when black
else
translate = [0;0;0];
rotate = [0;pi/(2*1800);-pi/(2*1800)];
count_new = 0;
end

% change state
if(count_old > count_max)
state_new = 2;
count_new = 0;
else
state_new = 1;
end

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% rotate to vertical line (state = 2) %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
case 2
% when white
if(In)
translate = [0;0;0];
rotate = [0;-pi/(2*1800);0];
% when black
else
translate = [0;0;0];
rotate = [0;0;0];
end

% change state
if(In)
state_new = 2;
else
state_new = 3;
end
count_new = 0;

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% follow vertical line (state = 3) %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
case 3
% when white
if(In)
translate = [0;0;0];
rotate = [0;-pi/(2*1800);-pi/(2*1800)];
count_new = count_old + 1;
% when black
else
translate = [0;0;0];
rotate = [0;pi/(2*1800);-pi/(2*1800)];
count_new = 0;
end

% change state
if(count_old > count_max && pos_old < pos_max)
state_new = 4;
count_new = 0;
elseif(count_old > count_max && pos_old >= pos_max)
state_new = 10; % STOP
else
state_new = 3;
end

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% translate to horizontal line (state = 4) %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
case 4

% when white
if(In)
translate = [0;1;0.5];
rotate = [0;0;0];
% when black
else
translate = [0;0;0];
rotate = [0;0;0];
end

% change state
if(In)
state_new = 4;
else
state_new = 1;
pos_old = pos_old + 1;
end
count_new = 0;

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% in the end (state = x) %
% => don't change anything %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
otherwise
state_new = state_old;
count_new = count_old;
rotate = [0;0;0];
translate = [0;0;0];
end

% update the number of translations
pos_new = pos_old;

% since a Simulink block cannot output a 3x3 matrix
% and a single number at the same time:
new = [state_new; count_new; pos_new];
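To exercise the state machine outside Simulink, a minimal open-loop
driver (a sketch with a synthetic sensor signal, assuming the block is
saved as a function file) shows the transition from the initial search
to line following:

% Feed the controller five white samples and then one black sample;
% it should stay in state 0 while white and enter state 1 on black.
old = [0; 0; 0];                 % state, count, pos
for step = 1:6
    In = (step <= 5);            % synthetic sensor: white, then black
    [translate, rotate, new] = fcn(In, 100, 1, old);
    fprintf('step %d: state %d\n', step, new(1));
    old = new;
end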
Appendix B

Implementation

B.1 Adapter drawing

[Dimensioned drawing of the ADAPTER2 part, A3 format, scale 1:1,
Machine Design LTH, dated 05-Jan-07]
Figure B.1: The adapter between the mounting bracket and the robot wrist


B.2 Rapid code


%%%
VERSION:1
LANGUAGE:ENGLISH
%%%

MODULE lasercb
!!!!!!!!!!!!!!!!!!!!!!!!
! variables in main: !
!----------------------!
! beam !
! !
! variables in cal: !
!----------------------!
VAR num rot_step;
VAR num trans_step;
VAR speeddata go_speed;
VAR speeddata trace_speed;
VAR num count_max;
VAR num pos_max;
VAR robtarget p1;
VAR num state;
VAR num pos;
VAR num count;
VAR intnum black_white;
! variables in saver !
!----------------------!
VAR iodev logfile;
VAR robtarget curr_pos;
VAR robtarget psave;
!!!!!!!!!!!!!!!!!!!!!!!!
! transformation from wrist to beam.
! Must be calibrated in some way. This is just a start:
PERS tooldata
beam:=[TRUE,[[0,0,165],[1,0,0,0]],[0.5,[0,0,5],[1,0,0,0],0,0,0]];

!%%%%%%%%%%%%%%%%
!% Main program %
!%%%%%%%%%%%%%%%%
PROC main()
curr_pos:= CRobT(\Tool:=beam\WObj:=wobj0);
! "find " current position
cal (RelTool(curr_pos,200,0,0\Ry:=-90));
ENDPROC

!%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
!% Calibration routine, pin is the target position %
!%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
PROC cal(
robtarget pin)

!%%%%%%%%%%%%
!% Settings %
!%%%%%%%%%%%%
! rotational stepsize (degrees)
rot_step:=0.05;
! translational stepsize (mm)
trans_step:=0.2;
! speed when goto next line
go_speed:=v10;
! speed when following line
trace_speed:=v10;
!when to stop run away
count_max:=100;
! = number of data sets - 1
pos_max:=1;
!%%%%%%%%
!% Init %
!%%%%%%%%
! showposition(pin)
p1:=RelTool(pin,0,0,200\Ry:=90);
MoveL p1,go_speed,fine,beam;
! wait for 2 seconds in the showposition
WaitTime\InPos,2;
! startposition(pin)
p1:=RelTool(pin,0,-25,200\Ry:=90);
MoveL p1,go_speed,fine,beam;
! start in state 0
state:=0;
pos:=0;
count:=0;
! open logfile
Open "HOME:"\File:="LOGFILE1.DOC",logfile\Write;
! init interrupt
CONNECT black_white WITH saver;
ISignalDI digIn1,edge,black_white;

WHILE TRUE DO
TEST state
CASE 0:
!%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
!% rotate to horizontal line (state = 0) %
!%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
! when white
IF digIn1=1 THEN
p1:=RelTool(p1,0,0,0\Ry:=-rot_step\Rz:=rot_step);
!translate = [0;0;0];
!rotate = [0;-pi/(2*1800);pi/(2*1800)];
! when black
ELSE
p1:=p1;
!translate = [0;0;0];
!rotate = [0;0;0];
ENDIF
MoveL p1,go_speed,fine,beam;
! change state
IF digIn1=1 THEN
state:=0;
ELSE
state:=1;
ENDIF
count:=0;

CASE 1:
!%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
!% follow horizontal line (state = 1) %
!%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
! when white
IF digIn1=1 THEN
p1:=RelTool(p1,0,0,0\Ry:=rot_step\Rz:=rot_step);
!translate = [0;0;0];
!rotate = [0;pi/(2*1800);pi/(2*1800)];
count:=count+1;
! when black
ELSE
p1:=RelTool(p1,0,0,0\Ry:=rot_step\Rz:=-rot_step);
!translate = [0;0;0];
!rotate = [0;pi/(2*1800);-pi/(2*1800)];
count:=0;
ENDIF
MoveL p1,trace_speed,fine,beam;
! change state
IF count>count_max THEN
state:=2;
count:=0;
ELSE
state:=1;
ENDIF

CASE 2:
!%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
!% rotate to vertical line (state = 2) %
!%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
! when white
IF digIn1=1 THEN
p1:=RelTool(p1,0,0,0\Ry:=-rot_step);
!translate = [0;0;0];
!rotate = [0;-pi/(2*1800);0];
! when black
ELSE
p1:=p1;
!translate = [0;0;0];
!rotate = [0;0;0];
ENDIF
MoveL p1,go_speed,fine,beam;
! change state
IF digIn1=1 THEN
state:=2;
ELSE
state:=3;
ENDIF
count:=0;

CASE 3:
!%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
!% follow vertical line (state = 3) %
!%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
! when white
IF digIn1=1 THEN
p1:=RelTool(p1,0,0,0\Ry:=-rot_step\Rz:=-rot_step);
!translate = [0;0;0];
!rotate = [0;-pi/(2*1800);-pi/(2*1800)];
count:=count+1;
! when black
ELSE
p1:=RelTool(p1,0,0,0\Ry:=rot_step\Rz:=-rot_step);
!translate = [0;0;0];
!rotate = [0;pi/(2*1800);-pi/(2*1800)];
count:=0;
ENDIF
MoveL p1,trace_speed,fine,beam;
! change state
IF count>count_max AND pos<pos_max THEN
!if(count_old > count_max && pos_old < pos_max)
state:=4;
count:=0;
ELSEIF count>count_max AND pos>=pos_max THEN
!elseif(count_old > count_max && pos_old >= pos_max)
state:=10;
! STOP
ELSE
state:=3;
ENDIF

CASE 4:
!%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
!% translate to horizontal line (state = 4) %
!%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
! when white
IF digIn1=1 THEN
p1:=RelTool(p1,0,trans_step,0.5*trans_step);
!translate = [0;1;0.5];
!rotate = [0;0;0];
! when black
ELSE
p1:=p1;
!translate = [0;0;0];
!rotate = [0;0;0];
ENDIF
MoveL p1,go_speed,fine,beam;
! change state
IF digIn1=1 THEN
state:=4;
ELSE
state:=1;
pos:=pos+1;
ENDIF
count:=0;
DEFAULT:
!%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
!% in the end (state = x) %
!% => shut down %
!%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
!shut down function
!disable interrupt
IDelete black_white;
!close logfile
Close logfile;
RETURN;
ENDTEST
ENDWHILE
ENDPROC

!%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
!% Interupt routine, save orientation data to file %
!%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
TRAP saver
! save transformation from world coordinate system (wobj0) to
! robot wrist coordinate system (tool0)
psave:=CRobT(\Tool:=tool0\WObj:=wobj0);
Write logfile," "\Num:=state\NoNewLine;
Write logfile," "\Num:=pos\NoNewLine;
Write logfile," "\Num:=digIn1\NoNewLine;
Write logfile," "\Num:=psave.trans.x\NoNewLine;
Write logfile," "\Num:=psave.trans.y\NoNewLine;
Write logfile," "\Num:=psave.trans.z\NoNewLine;
Write logfile," "\Num:=psave.rot.q1\NoNewLine;
Write logfile," "\Num:=psave.rot.q2\NoNewLine;
Write logfile," "\Num:=psave.rot.q3\NoNewLine;
Write logfile," "\Num:=psave.rot.q4;
ENDTRAP
ENDMODULE

B.3 Data processing


main.m
%
% Main program:
% - plots the measured data
% - calculates the unknown coordinate system
% - plots the unknown coordinate system
%
% Copyright (C) 2007 Petter Johansson

%% settings
%there are four outliers in this first run:
%logfile = importdata('LOGFILE1.DOC'); % import data from textfile

%the second run is ok:


logfile = importdata('LOGFILE2.DOC'); % import data from textfile

Tsensor = [eye(3),[0;0;165];0,0,0,1]; % true location of the sensor!?


on = ones(2,4); % plot plane m? on(k,m)!
blackandorwhite = [0,1]; % sensor value(s)

%% variable declarations
P = zeros(3,4); % known origin(s)
R1 = zeros(3,4); % known direction 1 of plane(s)
R2 = zeros(3,4); % known direction 2 of plane(s)
X0 = zeros(3,length(blackandorwhite));% unknown origin(s)
Rx = zeros(3,length(blackandorwhite));% unknown x direction(s)
Ry = zeros(3,length(blackandorwhite));% unknown y direction(s)
Z = zeros(3,length(blackandorwhite));% unknown z direction(s)

%% plots and calculations


figure
hold on
for k = blackandorwhite % sensor value
m = 1; % keep track of plane number (1..4)
for i = [1,3] % select unknown x (i=1) or y (i=3) vector
for j = [0,1]; % select dataset
% get data
log = extract_vector(logfile, Tsensor, i, j, k);

% plot the directions of plane m


if(on(k+1,m)==1)
if m == 1
color = 'r';
elseif m == 2
color = 'g';
elseif m == 3
color = 'b';
elseif m == 4
color = 'c';
end
testfcn(log(:,7:9)',log(:,4:6)',300,color);
end

% mean
P(:,m) = mean(log(:,7:9))';
R1(:,m) = mean(log(1:round(size(log,1)/2),4:6))';
R2(:,m) = mean(log(round(size(log,1)/2)+1:end,4:6))';

m = m + 1; % update plane number


end % end j loop
end % end i loop

%% calculate X0
abcd = gen_plane(P,R1,R2); % det([P';R1';R2'])=0
abc = abcd(1:3,:)'; % =>
d = abcd(4,:)'; % ax + by + cz + d = 0
X0(:,k+1) = abc\-d; % abc*X0 = -d (least-squares solve)

%% plot X0
scatter3(X0(1,k+1),X0(2,k+1),X0(3,k+1),'+');

%% calculate Hessian normal form


np = hessian(abcd);
n = np(1:3,:); % unit normal vector
p = np(4,:); % constant

%% m*X0 = b
mx = [n(:,1),n(:,2)];
my = [n(:,3),n(:,4)];

%% find the directions of the plane intersections


Rx(:,k+1) = -null(mx');
Ry(:,k+1) = -null(my');

%% plot the intersections


testfcn(X0(:,k+1),Rx(:,k+1),50, 'k');
testfcn(X0(:,k+1),Ry(:,k+1),50, 'k');

%% find the Z directions
Z(:,k+1) = cross(Rx(:,k+1),Ry(:,k+1))/norm(cross(Rx(:,k+1),Ry(:,k+1)));

%% plot the Z directions


testfcn(X0(:,k+1),Z(:,k+1),100, 'k');
end % end k loop (sensor value)
hold off;
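Because four planes in general position share no exact common point, the
backslash solve for X0 above is a least-squares estimate. A quick
residual check (a sketch reusing the Hessian normal form variables n and
p from the last loop iteration) gives the signed perpendicular distance
from X0 to each plane; small values mean the planes nearly intersect in
one point:

% Signed distances (mm) from the estimated intersection point to the
% four planes, for the sensor value k processed last in the loop above.
residuals = n'*X0(:,k+1) + p'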

extract_vector.m
%
% Extracts vector data from a logfile
%
% y = extract_vector(file, Tsensor, state, pos, digIn1)
%
% 1. Extracts the rows from logfile that correspond to:
% state, pos, digIn1
%
% 2. Uses transformation matrix Tsensor to transform from
% wrist coordinates into tool coordinates
%
% 3. Returns an n x 9 matrix containing
% state, pos, digIn, r11, r21, r31, px, py, pz
%
% Copyright (C) 2007 Petter Johansson

function y = extract_vector(logfile, Tsensor, state, pos, digIn1)

linedata = zeros(length(logfile),9); % temporary output
j = 0;                               % number of used rows in linedata

for i = 1:length(logfile)
if(logfile(i,1)==state && logfile(i,2)==pos && logfile(i,3)==digIn1)
% 1 convert from quaternion to homogeneous transform
tr = q2tr(logfile(i,7:10));
% recreate transformation matrix from robot base to robot wrist
Twrist = [tr(1:3,1:3),logfile(i,4:6)';0,0,0,1];

% 2 calculate transformation matrix from robot base to sensor
T = Twrist * Tsensor;

% 3 extract the beam direction (x direction) and origin
r = T(1:3,1)';
p = T(1:3,4)';

j = j + 1; % save data in next free row
linedata(j,:) = [logfile(i,1:3), r, p];
end
end

% return the data as state, pos, digIn, r11, r21, r31, px, py, pz
y = linedata(1:j,:);

testfcn.m
%
% Plot line(s) starting in x in the m direction
%
% Copyright (C) 2007 Petter Johansson

function y = testfcn(x,m,length,linesp)

f = zeros(2,1);
n = size(x);
for j=1:n(2)
for i=0:1
f(i+1,1:3) = x(:,j) + i*length*m(:,j);
end
plot3 (f(:,1),f(:,2),f(:,3), linesp); figure(gcf)
end
y = f;

gen_plane.m
%
% Find the plane(s) that pass through P(i)
% and is parallel to R1(i) and R2(i)
%
% det([P(i)';R1(i)';R2(i)']) = 0
% =>
% ax + by + cz + d = 0
%
% Copyright (C) 2007 Petter Johansson

function abcd = gen_plane(P,R1,R2)

a = zeros(1,length(P));
b = zeros(1,length(P));
c = zeros(1,length(P));
d = zeros(1,length(P));

for i = 1:length(P)
a(i) = R1(2,i)*R2(3,i)-R2(2,i)*R1(3,i); % beta1*gamma2-beta2*gamma1
b(i) =-R1(1,i)*R2(3,i)+R2(1,i)*R1(3,i); %-alfa1*gamma2+alfa2*gamma1
c(i) = R1(1,i)*R2(2,i)-R2(1,i)*R1(2,i); % alfa1*beta2-alfa2*beta1
d(i) =-P(1,i)*a(i)-P(2,i)*b(i)-P(3,i)*c(i); %-x0*a-y0*b-z0*c
end

abcd = [a;b;c;d]; % output



hessian.m
%
% Transform the plane:
% ax + by + cz + d = 0
% into Hessian normal form:
% n*x = -p
%
% Copyright (C) 2007 Petter Johansson

function np = hessian(abcd)

a = abcd(1,:);
b = abcd(2,:);
c = abcd(3,:);
d = abcd(4,:);

nx = zeros(1,length(a));
ny = zeros(1,length(a));
nz = zeros(1,length(a));
p = zeros(1,length(a));

for i = 1:length(a)
nx(i) = a(i)/(sqrt(a(i)^2+b(i)^2+c(i)^2));
ny(i) = b(i)/(sqrt(a(i)^2+b(i)^2+c(i)^2));
nz(i) = c(i)/(sqrt(a(i)^2+b(i)^2+c(i)^2));
p(i) = d(i)/(sqrt(a(i)^2+b(i)^2+c(i)^2));
end

np = [nx;ny;nz;p];

q2tr.m
% Q2TR Convert unit-quaternion to homogeneous transform
% T = q2tr(Q)
% Return the rotational homogeneous transform corresponding
% to the unit quaternion Q.
% Copyright (C) 1993 Peter Corke

function t = q2tr(q)

q = double(q);
s = q(1);
x = q(2);
y = q(3);
z = q(4);

r = [ 1-2*(y^2+z^2)  2*(x*y-s*z)    2*(x*z+s*y)
      2*(x*y+s*z)    1-2*(x^2+z^2)  2*(y*z-s*x)
      2*(x*z-s*y)    2*(y*z+s*x)    1-2*(x^2+y^2) ];
t = eye(4,4);
t(1:3,1:3) = r;
t(4,4) = 1;
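A minimal sanity check of the conversion (assumed test values): the
identity quaternion must map to the identity transform, and the
quaternion [0 0 0 1] corresponds to a rotation of pi about the z axis.

% Quick checks of q2tr against known quaternion/rotation pairs
assert(isequal(q2tr([1 0 0 0]), eye(4)));  % identity rotation
disp(q2tr([0 0 0 1]));   % expected: diag([-1 -1 1 1]), i.e. rotz(pi)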
Appendix C

Results

C.1 3D plots

[3D plot; axis tick ranges approximately 950-1050, 640-720 and 550-900 mm]

Figure C.1: The four planes intersecting


[3D plot; axis tick ranges approximately 950-1030, 640-720 and 600-1000 mm]

Figure C.2: The four planes intersecting



[2D projection; axis tick ranges 950-1030 and 640-720 mm]

Figure C.3: View from the x-y plane



[2D projection; axis tick ranges 950-1030 and 550-900 mm]

Figure C.4: View from the x-z plane



[2D projection; axis tick ranges 640-720 and 550-900 mm]

Figure C.5: View from the y-z plane


