
Miles Dobbie SID: 200595298

Supervisor: Professor Levesley

Final Project Report

Mech 3810

Recording and Replicating

Human Arm Motion for use
within Physiotherapy
A system has been developed which combines two previous
projects: a LabVIEW interface for a Microsoft Kinect camera and a
two-link robotic arm. Several LabVIEW programs were developed,
allowing motion recorded by the Kinect camera to be reproduced
using the robotic arm. The motion was plotted on paper by the
robotic arm, which was then compared to a graph of the original
motion recorded by the Kinect. The two were visually similar, with the
final motion retaining the same overall shape as the original motion.
However, some motions are not replicable, and certain components of a
motion, notably vertical lines, proved difficult to replicate accurately.
Physical limitations with the robotic arm were deemed to be the
cause. Such a system could be used as a tool in physiotherapeutic
care, bringing significant benefits to the patient and the therapist. The
Kinect has many capabilities that could be further investigated,
including the possibility of using it to administer physiotherapeutic
exercises remotely.

I would like to thank Professor M. Levesley and Dr A. Jackson for
their guidance throughout this project along with Mr D. Readman for
his help within the control lab.
This project was only made possible thanks to the previous work
completed by the members of the Kinesthesia team: B. Cotter, C.
Norman and D. Clark, as well as the summer project students: R. Coe,
N. Hussain, A. Oladokun, W. Stokes, J. Skyes and S. Yang.




Aims & Objectives
Literature Review
  Kinect Camera
  Physiotherapeutic Robots
  Existing Robotic Arm
  LabVIEW Interface
System Development
  Motion Capture
  Analysis of Original Robotic Arm
  Transformation of Kinect Output to Robot Input
  Replication of Motion
Discussion & Conclusions
Future Work
  Enhancements for Existing Systems
  Development of Advanced Applications
Appendix 1  Robotic Arm Wiring Diagram
Appendix 2





Robots are becoming commonplace in physiotherapy and are
regularly incorporated into the rehabilitation routines of a patient.
The benefits to a patient undergoing orthopaedic rehabilitation, as
well as to the physiotherapist in delivering consistent treatment, have
been well documented. A robotic arm can be used during exercises
to provide muscular resistance to the patient and draw their arm
through movements that were lost after injury or illness. The system I
have developed as part of this project is an early demonstration of
how powerful consumer-level technology can be combined with
existing hardware, allowing a physiotherapist to remotely administer
exercises for orthopaedic rehabilitation, particularly for the arm. In a
demonstration of an exercise to a patient, the physiotherapist would
simply move their arm through a range of motions that the patient
wishes to recover. The system records this motion and a robotic arm
replicates it on the patient gripping its end. This directs the
patient's arm through the range of movements that they would
normally be unable to perform on their own.

The project consists of two parts: firstly, recording the motion
using a Microsoft Kinect camera (Microsoft Corporation, Redmond,
U.S.), and secondly, reproducing it on a robotic arm. Other existing
motion tracking systems are more accurate than this project requires.
Although the Microsoft Kinect device was not initially designed for
applications beyond computer gaming, it has proven to be a very
capable piece of hardware. The project follows on from previous
students' work. Kinesthesia (Cotter, Clark, & Norman, 2012a), an
interface between the Kinect camera and LabVIEW software
(National Instruments, Texas, U.S.) and the first of its kind, was
developed as part of a MEng final year group project. The robotic
arm and the LabVIEW program used to control it were built in
summer 2012. As well as making use of this robotic arm, much of its
code was used as a template for the code written as part of this
project, which essentially produces the missing link between the
motion capture and replication sides of the system.

The low cost of the Kinect camera, a widely available
consumer electronics device, means that a system of this sort,
delivering remote rehabilitation exercises, will be accessible to a far
greater number of patients than would have been possible with
existing motion capture systems, which also have other
disadvantages. The Kinect can be used as an add-on to existing
rehabilitative robot systems for minimal additional expense. This
proposal also extends the scope for rehabilitative robots to be
installed in patients' own homes rather than at a clinic. By eliminating
the need to travel to a clinic (which may be problematic if the patient
has been disabled) or for the physiotherapist to visit the patient each
session, exercise sessions can be performed more frequently. This
will reduce not only the cost of treatment but also the recovery time.

1.2 Aims and Objectives

1.2.1 Aim
Develop a control interface between a Kinect camera and a robotic
arm which could in future be used as a physiotherapeutic tool
administering rehabilitative exercises remotely to a patient.

1.2.2 Objectives

- Create a LabVIEW motion tracking program using a Kinect camera, specifically for upper limbs
- Convert the Kinect output to a suitable robotic arm input (within the capabilities of the arm)
- Program the robotic arm to run from the Kinect output

Literature Review

This project spans four main topics: the Microsoft Kinect camera,
LabVIEW as an interface, robotic arms, and upper limb rehabilitation.
The work carried out for the project has established links between
each of these elements. Reasoning behind the hardware and
software choices made will be given.

2.1 Kinect Camera

The Microsoft Kinect camera was initially developed as a
motion sensor accessory to the popular XBOX games console. It
allows player interaction with a game and there is no requirement for
a traditional hand held controller. Since it is a consumer product
aimed at mass market it has a much lower cost of 199 (Microsoft
Online Store, 2013) compared to existing motion capture systems. It
is a relatively compact freestanding piece of hardware and along with
the quality of the Kinects data output its possible applications
beyond gaming are becoming apparent.

Microsoft's commitment to the utilisation of the device in
applications beyond gaming is demonstrated by their development of
a Kinect camera designed specifically for use with Windows (Kinect
for Windows, 2012), alongside an SDK (software development kit).
The Kinect was partly chosen because of its powerful combination of
an infrared depth camera and an RGB camera (Kinect for Windows
Sensor Specifications, 2012). Extracting precise data from just a
depth camera is extremely difficult (Henry, Krainin, Herbst, Ren, &
Fox, 2012); however, by combining it with visual information from the
RGB camera, such as the fact that contrasting colours often
represent different objects, the depth calculations become much more
accurate. These combined cameras are termed RGB-D cameras.

Figure 1. A comparison of RGB and depth information acquired by the Kinect camera (Henry et al., 2012)

As the Kinect was only released in 2010, the accuracy of its
data is still being examined, but initial investigations have noted
minimal error when comparing measurements obtained with the
Kinect against other established motion tracking systems. The team
of students behind the Kinesthesia interface tested the skeletal
tracking abilities of the Kinect and compared it to the industry-standard
Optotrak marker tracking system, housed within the Mechanical
Engineering department. Their research highlighted that the
Kinect is accurate enough for the purposes of most rehabilitation
exercises, which normally consist of slow, gentle motions; tracking
these is a strength of the Kinect (Cotter et al., 2012a). Their
research also examined the accuracy of the Kinect's depth
camera and showed an exponential increase in error as distance
from the camera increases. A similar study concluded that this
could be due to the density of data points (i.e. resolution) on the
depth map reducing as the distance of the Kinect from the measured
object increases (Khoshelham, 2011). The same study suggested
that a distance of between 1 and 3 meters should be used in
depth mapping applications.

The full capabilities of the Kinect are not used in this project,
with it tracking only the right and left wrists. However, further
controls could be implemented into the motion capture program.
Natural gestures such as nodding and shaking of the head can be
recognised by the Kinect using its depth camera with little computing
power (Biswas & Basu, 2011), corresponding to yes and no
commands. The motion tracking program could also be developed to
record lower limb motion; it has been demonstrated that the Kinect
can be used to evaluate foot position (Mentiplay & Clark).

2.2 Physiotherapeutic Robots

The use of robotic systems in physiotherapy, during rehabilitation
after an injury or illness, is becoming more commonplace. These
facilitate a patient's recovery by helping them to move through the
range of movements that were lost. Their uptake by physiotherapists
is likely to increase, since a reduction in healthcare budgets and a
generally increased burden on healthcare systems is being witnessed
in countries across the world. The initial cost of developing and
implementing such systems may be high, but the possibility of more
regular treatment sessions with fewer physiotherapist contact hours
needed is attractive; over time this will compensate for the initial
outlay. Such systems usually result in higher quality treatment, since
using a robotic arm in rehabilitation exercises could provide
programmable resistance and detection of unwanted movement (Lu,
2011). Precise control during highly repetitive exercises is a feature a
robot can provide that is not consistently possible with a
physiotherapist's manual guidance.

Many studies have assessed the improvements in patients'
upper arm movement achieved by using advanced robots in
physiotherapy. Although there is debate over which exercises and
techniques are most effective, there is consensus that, when
performed correctly, significant improvements resulting from robot
use exist (Kwakkel, Kollen, & Krebs, 2008). These improvements
seem to arise in one of two ways: firstly through robots being used as
artificial therapists, and secondly as measuring devices.

iPAM is a system developed to assist people engaging in
upper arm rehabilitation after a stroke. It is a more advanced robotic
arm than the one used in this project; however, it has undergone
testing with patients to assess its acceptability in delivering
rehabilitative exercises. The results provide a perspective from
patients and physiotherapists, and are an important consideration for
the design specifications of the robotic arm used in this project.
Patient and physiotherapist interest in such a system is high, and
most of those interviewed would like to see it developed further.
However, they were uncertain about it being implemented in its
current state: the arm must be made more comfortable to use as
well as quicker to set up. For use to be extended to large-scale
healthcare organisations such as the NHS, as well as patients' own
homes, it must be easily portable (Jackson et al., 2009).

2.3 Existing Robotic Arm

The robotic arm used in this project was built by students to
investigate the idea for a second year medical engineering
competition. Building a robotic arm with physiotherapeutic
applications in mind would coincide with the design and
manufacturing modules and with research into physiotherapeutic
robots (Coe et al., 2012). It is a double-jointed, two degrees of
freedom robotic arm operating within a workspace of radius 350 mm,
built entirely from components in the department's Design Catalogue.
It is controlled by a CompactRIO programmed in LabVIEW.

Original robotic arm design specifications (Coe et al., 2012):
- Must fit in a box of 800 x 600 x
- Must be a 2 link robotic arm powered by two motors and using two optical encoders for odometry.
- Must accurately position the Wii remote over the photodiode targets in the correct order.
- Must be suitably designed for clamping to the desk.

Figure 2. SolidWorks model of robot (Coe et al., 2012)

Figure 3. Arm fully extended, at starting position

2.4 LabVIEW Interface

LabVIEW is well established within the engineering department at
Leeds as a control interface for many mechatronic systems built by
students. Its highly customisable yet user-friendly interface has been
used in major research projects and is ideal for the control of this
project's robotic arm.

The Kinesthesia toolkit consists of a set of VIs (virtual
instruments) which allow the three data feeds (RGB, depth and
audio) from a Kinect camera to be accessed within LabVIEW (Cotter
et al., 2012a); in this project, however, audio is not used. It was
developed using Microsoft's .NET framework and their Kinect SDK.
By using a DLL (dynamic link library), the Kinect's data streams can
be used by multiple programs at once. The toolkit follows National
Instruments' recommended DAQ structure, and its VIs containing the
data streams can simply be dragged into a LabVIEW program.
Although the Kinect is capable of actively tracking two skeletons at a
time (Kinect for Windows Sensor Specifications, 2012), Kinesthesia
tracks just one to improve the program's efficiency.

Figure 4. National Instruments' recommended data acquisition structure (Cotter, Clark, & Norman, 2012b)

The potential for Kinect control of mechatronic systems has
previously been demonstrated by integrating the camera with a
VTOL (vertical take-off and landing) demonstration rig. The rig was
powered by two DC motors attached to propellers on each end of a
beam which pivoted on a fulcrum, and the motors were controlled by
a LabVIEW program. When a person stands within range of the
Kinect, the angle of their spine is tracked and, via LabVIEW,
converted into a voltage for each DC motor. This rotates the rig to
the same angle as the person's spine.


2.5 Conclusions

The literature has highlighted some of the requirements for the
rehabilitation system proposed in this report, particularly if it is to be
used in a clinical environment and as an effective rehabilitation tool.
Patients who trialled various systems noted that it must, of course, be
comfortable and simple to operate (Kwakkel et al., 2008).
Physiotherapists who observed trials of iPAM also mentioned that it
must be portable and quick and easy to set up (Jackson et al., 2009).

The Kinect camera in particular was chosen as the motion
capture tool for this system due to its low cost, compactness and
robust form. It requires no manual calibration, nor markers attached
to the user (Cotter et al., 2012a); markers proved to be inconvenient
when used with Optotrak. Eliminating the need for markers reduces
set-up time and makes the system less intrusive for a rehabilitation
patient, making regular use more likely. Since it is an RGB-D camera,
as opposed to purely a colour or depth camera, less computer
processing is needed for high-quality skeletal tracking (Henry et al.,
2012). Upon examining the Kinect's tracking abilities, the Kinesthesia
team found that more accurate data was produced from slow and
calm movements, of which rehabilitation exercises mainly consist.

The robotic arm to be used in the project operates within a
350 mm radius, or 700 mm diameter, which is roughly the average
length of the human arm. Its design takes into consideration a weight
of 1 kg held at the end of the robotic arm (Coe et al., 2012); this
could be similar to the weight of the arm of a patient grasping the
end of the robot. As the robot has only two degrees of freedom, it is
relatively simple and cheap to build compared to robots with more.
All its components were selected from the department's Design
Catalogue, which contains a limited selection of materials. This
demonstrates its simplicity and helps keep the overall cost of the
system down. Since LabVIEW was used as the programming
software when testing the arm, concerns regarding compatibility
issues are reduced. The Kinect-controlled VTOL demonstration has
shown that LabVIEW is a suitable control interface for mechatronic
systems.
The low cost of a Kinect-controlled therapeutic robot would
hopefully prompt a high rate of uptake within physiotherapy, as well
as systems being installed directly in patients' homes. The benefits of
accessing the system at home are clear: additional exercising time is
available, and the cost of care and physiotherapist contact hours are
both reduced.

System Development

Since the development of this system depends on the two earlier
projects, detailed examinations of both the Kinesthesia toolkit and
the robotic arm hardware and software were completed before work
began, in order to evaluate possible methods of implementation into
the new system and to decide which elements were of no use to the
new system's requirements.

3.1 Motion Capture

The Kinesthesia toolkit consists of several subVIs which can be
incorporated into LabVIEW code to access the Kinect's data
streams. The toolkit can simply be downloaded from the National
Instruments website; however, the installation process proved to be
more complicated. LabVIEW was first configured to run Microsoft
.NET assemblies. In order to access the Kinect, a .NET 4.0
framework was installed, and a configuration file was created to force
LabVIEW to open .NET assemblies, which it would not do without
the file. The Kinect Software Development Kit was also installed.
Following installation, the Kinesthesia toolkit VIs are placed within
LabVIEW's functions palette; this is possible since the Kinesthesia
toolkit is packaged similarly to other hardware drivers. Unpacking the
toolkit required installing the VI Package Manager. The eventual
installation of the toolkit itself proved to be buggy, with several of
LabVIEW's configuration files repeatedly needing to be located
manually. However, once fully installed, implementing the VIs into
customised LabVIEW code is simply a drag and drop action, as with
any of the other functions from LabVIEW's internal functions palette.

Not all of the VIs provided in the toolkit were used in the
motion capture system for this project: the depth and RGB data
streams were not used directly, and the skeleton data stream was
used to capture the position of the skeletal joints. All the processing
of the image seen by the Kinect's cameras is completed on board.
The fact that only the skeletal data stream is read by this motion
capture program does not mean that the other data is unused: the
skeletal data stream, made up of joint positions, is generated on
board the Kinect using data from the RGB and depth cameras.

The skeletal data stream of the Kinect contains the positions of
20 of the user's joints. The front panel of the motion capture program
displays a render of the skeleton as further reassurance to the user
that their motion is being tracked. For the purposes of the project
only one of the 20 joint coordinates is passed on to the robotic arm
input: the right wrist position. Here only the x and z values are taken,
since the robotic arm operates in the horizontal 2D x-z plane. The
output of the Kinect is given in meters to 3 decimal places, i.e. a
resolution of 1 mm.

Figure 5. Kinect coordinate space

The y values of the left hand are also used internally by the
program to start and stop the recording of right hand movement. The
ability to control the program with the user's left hand was
implemented to prevent the program recording the user's movement
as they enter or exit the Kinect's workspace in order to start or stop
the program. When the user clicks run on the computer, the Kinect
activates, but the program does not begin recording right hand
movement until the user raises then lowers their left hand, allowing
them time to move into the Kinect's line of sight and hold their right
hand at the starting position of the movement. Similarly, raising the
left hand during recording will stop the program, without the user
needing to return to the computer.
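The gesture control described above can be sketched as a small state machine. This is a hypothetical illustration, not the Kinesthesia toolkit's API: joint positions are assumed to arrive as a dictionary of (x, y, z) tuples in meters, and the joint names and raise threshold are invented for the example. For simplicity the sketch toggles recording on every raise-then-lower of the left hand.

```python
# Hypothetical sketch of the left-hand start/stop logic. Joint names and the
# RAISE_THRESHOLD value are assumptions, not part of the real toolkit.

RAISE_THRESHOLD = 0.25  # metres above the shoulder counts as "raised" (assumed)

def left_hand_raised(joints):
    """True when the left wrist is held above the left shoulder."""
    return joints["left_wrist"][1] > joints["left_shoulder"][1] + RAISE_THRESHOLD

class RecordingGate:
    """Toggles recording on a raise-then-lower gesture of the left hand."""
    def __init__(self):
        self.recording = False
        self._was_raised = False

    def update(self, joints):
        raised = left_hand_raised(joints)
        # Toggle on the falling edge: the hand was raised and is now lowered.
        if self._was_raised and not raised:
            self.recording = not self.recording
        self._was_raised = raised
        return self.recording
```

Each call to update() would correspond to one frame of skeleton data; the returned flag gates whether the right wrist position is recorded on that frame.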

Figure 6. Front panel of motion capture program, containing status bar, 3D skeleton render with joint positions as spheres, and a plot of recorded right hand motion


The code that graphs and exports the right hand motion is
contained in a while loop controlled by a wait function, which
regulates the speed at which the motion is recorded, with each loop
iteration recording one point. The wait was set to 100 ms, resulting in
10 points per second being recorded. This rate was sufficiently
accurate for the motions recorded, but a manual control could easily
be implemented in future work if deemed necessary. The number of
recorded points and the right hand x and z coordinates are written to
two text files.
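The structure of that loop can be sketched in Python, under the assumption of a get_right_wrist() stand-in for the Kinect skeleton read and a should_stop() flag in place of the left-hand gesture; both names are illustrative, not the real program's API.

```python
# Minimal sketch of the recording loop: one point per iteration, paced by a
# wait, with the point count and the right-wrist x/z coordinates written to
# two text files, as the report describes.
import time

SAMPLE_PERIOD_S = 0.1  # 100 ms wait -> 10 points per second

def record_motion(get_right_wrist, should_stop, count_path, coords_path):
    points = []
    while not should_stop():
        x, _, z = get_right_wrist()      # only x and z are kept
        points.append((x, z))
        time.sleep(SAMPLE_PERIOD_S)      # regulates the sampling rate
    with open(count_path, "w") as f:
        f.write(str(len(points)))        # number of recorded points
    with open(coords_path, "w") as f:
        for x, z in points:
            f.write(f"{x:.3f} {z:.3f}\n")  # metres, 3 d.p. (1 mm resolution)
    return points
```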

3.2 Analysis of Original Robotic Arm

In order to assess the capabilities of the robotic arm, its existing
control code was run. The optical encoders that are usually mounted
onto each joint to read the arm's angles could not be found; two new
ones were sourced, and a metal bracket was made to secure one of
them to the main chassis. The voltages to the DC motors are
regulated by a motor control board, which is itself regulated by a
CompactRIO controlled by LabVIEW. A wiring diagram is attached in
appendix 1.





The original arm was programmed to move between 5 points
arranged on the points of a pentagram. The motion is made up of
straight lines, with the resulting output shaped as a star. This shape
is drawn in only one side of the robot's available workspace, which
greatly simplifies the control code but only allows small shapes to be
drawn. The coordinates for each point were fixed in the LabVIEW
code as an array. The kinematics were handled by a subVI that read
this coordinate array, calculated the coordinates of 50 equidistant
points located on the lines between each pair of points, and found
the angles the two links of the robot must travel through to reach
each point. The complete code for the existing robot was based on a
template of a LabVIEW project file used in the 2nd year Mechanical
Engineering buggy competition, during which students use LabVIEW
to deploy code to a CompactRIO which drives the buggy's motors.
The template provided a structure which allowed the control code to
make use of the encoders and the go button on the switch box, and
allows deployment to the CompactRIO.

Figure 7. Original pentagram output

By analysing the existing robotic arm and its control code, an
idea of the range of movements the robot could perform was
acquired. The arm was found to have a maximum reach of 350 mm,
made up of two 175 mm links (named the upper arm and forearm).
The shoulder angle could move through a range of 45° and the
elbow angle, the angle between the upper arm and forearm, through
160° (Figure 8).

Figure 8. Left: range when fully extended. Right: upper arm at 45° and forearm at -160° (maximum angles)

3.3 Transformation of Kinect Output to Robot Input

Since the range of the robotic arm is fairly limited, it cannot use the
same coordinate system as the Kinect: the z values of the Kinect
output become the y values of the robot input. A LabVIEW program
was developed to convert the output of the Kinect into an input to
the robotic arm. The diameter of the robot's maximum range is 0.7 m,
approximately half the equivalent range of the average human arm.
This means that when transforming the motion the Kinect records
into a motion replicated by the robotic arm, it must be scaled down
by a factor of 2. This is done by multiplying the Kinect's output
coordinates by 200, which also converts them from meters into
millimeters. Once scaled down, the LabVIEW program also shifts the
coordinates so that the first position recorded by the Kinect is set in
the robot's coordinate system to a datum point (0, 350), the position
of the end of the arm when fully extended. On average, more
motions starting from this central point are likely to fit in the robot's
workspace than from any other point. Successive points on the
motion are then transformed by the same amount as the first.
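This scale-and-shift conversion can be sketched directly; the scale factor of 200 and the (0, 350) datum are the values quoted above, while the function and parameter names are illustrative rather than those of the actual LabVIEW program.

```python
# Sketch of the Kinect-to-robot conversion: Kinect (x, z) in metres becomes
# robot (x, y) in millimetres, scaled and then shifted so the first recorded
# point lands on the datum at the end of the fully extended arm.
SCALE = 200            # metres -> millimetres combined with the size reduction
DATUM = (0.0, 350.0)   # end of the arm when fully extended

def kinect_to_robot(points_xz):
    """points_xz: list of (x, z) Kinect coordinates in metres."""
    if not points_xz:
        return []
    scaled = [(x * SCALE, z * SCALE) for x, z in points_xz]
    # Shift every point by the offset that puts the first point on the datum,
    # so successive points are transformed by the same amount as the first.
    dx = DATUM[0] - scaled[0][0]
    dy = DATUM[1] - scaled[0][1]
    return [(x + dx, y + dy) for x, y in scaled]
```

Because every point receives the same offset, the aspect ratio of the recorded motion is preserved, as the results section notes.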

One disadvantage of starting at this point is that the end of
the arm is already at its maximum y position of 350 mm, so no
further point of the motion can exceed this starting point. This is
prevented when recording the motion with the Kinect by ensuring
that the motion begins from the starting position by moving towards
the camera, and does not move further back than the starting
position. After these transformations have been completed, the
updated set of coordinates is written to a file, and a plot is displayed.
Another problem with the robotic arm is a limitation in the angle that
the forearm can move through: it cannot change sign during a single
motion. Changing sign (i.e. moving past 0°) would require the end of
the arm to pass through a point at its maximum reach of 350 mm;
this point, as well as the surrounding points, is unlikely to reflect the
actual movement recorded by the Kinect. The subVI written to
calculate the kinematics restricts the angle of the forearm to remain
either positive or negative throughout a motion, preventing such
points from occurring. However, this reduces the available
workspace: as seen in Figure 8, where the forearm angle is negative,
the workspace for x > 0 is greatly reduced.

The program also contains the ability to reduce the total
number of coordinates sent to the robot. This functionality is not
required for normal operation, but can be used to simplify a complex
motion. It can also be used to remove the occasional point where the
Kinect depth camera has misread a z value.
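One simple way such a point-reduction feature could work is to keep every n-th coordinate while always retaining the final point. This is an illustrative sketch under that assumption, not the actual LabVIEW implementation; the keep_every parameter name is invented.

```python
# Thin a coordinate list by keeping every n-th point. keep_every=1 leaves the
# motion unchanged; larger values simplify a complex motion and can drop the
# occasional misread point.
def thin_points(points, keep_every=1):
    if keep_every < 1:
        raise ValueError("keep_every must be >= 1")
    thinned = points[::keep_every]
    # Always retain the final point so the motion ends where it should.
    if points and thinned[-1] != points[-1]:
        thinned.append(points[-1])
    return thinned
```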

3.4 Replication of Motion

As with the original program controlling the robot that drew a
pentagram, the new program reading the Kinect output runs via a
CompactRIO. To allow larger shapes to be drawn than was possible
with the original pentagram program, a larger workspace is used,
instead of only the upper right corner. This, along with the need to
ensure that the forearm angle remains positive or negative
throughout a motion, required writing a new 'angle finder' subVI. This
calculates the angles the forearm and the upper arm must move
through to reach each coordinate of the motion; these angles are
taken from the arm's initial vertical position. The formulas used to do
so can be found in appendix 2.

The new angle finder subVI is used in the new robot control
program. Here the motion coordinates are read from the file output by
the Kinect-to-robot conversion program discussed in the previous
section. However, in order for the robot to access the coordinate files,
they must first be placed on the CompactRIO's internal memory.
This simply consists of dragging and dropping the two files between
the computer's hard drive and the CompactRIO's internal memory
using Windows Explorer, and can be completed in a matter of
seconds. A coordinate from the file is read by the robot control
program and is input to the angle finder.

The two encoders attached to the shoulder and elbow of the
arm are used to measure the upper arm and forearm angles from
their initial positions. By dividing the shoulder and elbow angles by
0.7, the angles are converted to encoder count targets. As this
whole section of code is embedded in a loop, the encoder target is
continuously compared to the live value being read from the robot's
encoders. While the live encoder value for either the elbow or
shoulder is not equal to the target value, the respective motor will run
at 10% of its maximum power, rotating that link of the arm. The
power is negated to reverse the motor, which is necessary when the
encoder target is smaller than the current live value. Once the live
value equals the target, the power for that particular motor is set to
0%, and the correct angle is achieved. A tolerance of 3 encoder
counts results in the motor power being set to 0% as soon as the
angle is within 2.1° of the target angle. When both the elbow and
shoulder encoder targets are met, the program repeats using the
next point of the motion as the target, and continues until all points
on the motion have been reached.
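This control law can be modelled as a simple bang-bang controller. The constants below (0.7° per encoder count, a 3-count tolerance, 10% power) are taken from the text; the function names are illustrative and the real CompactRIO code is graphical LabVIEW, not Python.

```python
# Per-joint bang-bang control as described in the report: run at +/-10% power
# until the live encoder count is within 3 counts of the target, then stop.
DEG_PER_COUNT = 0.7
TOLERANCE = 3     # encoder counts, i.e. 2.1 degrees
POWER = 0.10      # 10% of maximum motor power

def angle_to_target(angle_deg):
    """Convert a joint angle in degrees to an encoder count target."""
    return round(angle_deg / DEG_PER_COUNT)

def motor_power(live_count, target_count):
    """Power command for one joint: zero inside tolerance, else +/-10%."""
    error = target_count - live_count
    if abs(error) <= TOLERANCE:
        return 0.0
    return POWER if error > 0 else -POWER
```

Negative power corresponds to reversing the motor when the target count is below the live value, exactly as the text describes.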

The program includes an emergency stop function, which is
activated when the emergency stop button on the robot is pressed,
or when, while travelling to the target angle, the robot reaches the
edge of its workspace.

Results

A simple yet effective method to validate the output motion of the
robot was to attach a pen to the end of the arm and allow it to draw
the motion. Three different motions are presented, which have been
recorded by the Kinect motion capture program, converted to robotic
arm coordinates, and drawn by the robot.
Motion 1.
a. Kinect motion capture

b. Conversion to robot coordinate system

c. Robot output motion


Motion 2.




Motion 3.






The conversion to the robot coordinate system is successful, with the
starting point of the motion being set to (0, 350) and the following
points transformed by the same amount, preserving the aspect ratio.
The overall shape of the motion drawn by the robot is relatively close
to that recorded by the Kinect; however, there are discrepancies. In
particular, it is obvious that the robot struggles to recreate vertical
lines. This difficulty was further highlighted when a set of test
coordinates, where x remained constant and y decreased linearly
from 350 to 275, was used as the robot's input. As is clear in Figure
9 below, the end of the arm judders whilst moving vertically. Also,
small single-point deviations from the general course of the motion
are often not registered by the robot, as seen in Figure 10.

Figure 9. Vertical line, where x = 0, 350 > y > 275

Figure 10. Comparison of Kinect output and robot drawing, of a small deviation at end of motion 3


Discussion and Conclusions

By fulfilling each objective set out in the introduction, the aim of the
project, to develop a control interface between a Kinect camera and
a robotic arm, has largely been met, as can be seen in the results: a
motion is recorded by the Kinect motion capture program and
replicated by the robotic arm.

The motion tracking program, which makes use of the Kinect
camera, is as accurate as the Kinect itself, with the only potential
degradation in quality resulting from the sampling rate not being
frequent enough. For this particular motion capture application a
sampling rate of 10 Hz is sufficient. This can easily be altered within
the code; however, much higher sampling rates may require more
computing power. As discussed in the literature review, the Kinect
has been tested against other, more advanced tracking systems
such as the Optotrak system, and has performed well.

The program written to convert the Kinect's output to coordinates suitable for the robotic arm is adequate, as long as the z values (distances from the Kinect) of the recorded motion are all smaller than the first recorded value. It would not be difficult to implement a function in the program that coerces any values greater than the initial value (0, 350) to below it; however, this would greatly alter the overall resulting motion. A system that is limited in which motions it can replicate, but does so with relatively accurate results, was deemed to be of more use than a system which could respond to any motion performed, but often inaccurately. Currently, if a motion exceeds the range of the robot, the robot stops when it reaches its maximum range whilst attempting to draw the motion. It would be useful for the conversion program to display an alert when the recorded motion is likely to exceed the range of the robot, without having to operate the robot to find out.
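Such a pre-flight range check could, as a sketch, look like the following; the 350 mm reach comes from the two 175 mm links at full extension, and the function name is illustrative:

```python
import math

ARM_REACH_MM = 350.0  # two 175 mm links fully extended

def check_motion(points, reach=ARM_REACH_MM):
    """Flag coordinates the two-link arm cannot reach, before any robot run.

    `points` is a list of (x, y) targets in the robot's drawing plane.
    Returns the indices of unreachable points; an empty list means the
    whole motion lies inside the workspace.
    """
    bad = []
    for i, (x, y) in enumerate(points):
        if math.hypot(x, y) > reach + 1e-9:   # beyond full extension
            bad.append(i)
    return bad
```

Run before sending the file to the robot, a non-empty result would drive the suggested alert on the conversion program's front panel.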

The main limiting factor on the overall performance of the system is the robotic arm. Kinect operation of a simple robotic arm, built from a small selection of components, has been successful. However, several flaws make this particular robot unsuitable for more advanced applications requiring greater accuracy, such as rehabilitative exercises. As seen in Figure 9 in the results section, the robot cannot accurately draw a vertical line. This is most likely due to two aspects of the robot's design. Firstly, there is a large amount of backlash in the gearing of the DC motors. What may seem only a small amount of slack in a motor's drive shaft propagates into a large displacement at the end of the arm, and this backlash is further amplified by there being two DC motors. Secondly, the robot cannot move directly in the y direction, since it relies on radial motion to move the end of the arm; to follow a vertical line, the end of the arm zigzags along the desired path. This juddering motion has been reduced as much as possible by slowing the motors and increasing the number of coordinates along the vertical line. However, the requirement of having a tolerance limits the possible improvements to vertical-line motion. When fully extended, the tolerance of 3 encoder counts written in the code, or 2.1°, results in an end x position range of approximately 350 tan(2.1°) = 12.8 mm. This does not even take into account the backlash in the system, which would further increase the range of end positions deemed to be within the target angle's tolerance.
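The displacement figure quoted above can be checked with a short calculation. The 0.7° per encoder count implied by "3 counts, or 2.1°" is taken from the text; everything else is trigonometry:

```python
import math

L_FULL_MM = 350.0        # arm fully extended (2 x 175 mm links)
DEG_PER_COUNT = 0.7      # implied by 3 counts corresponding to 2.1 degrees

def end_error_mm(counts, radius_mm=L_FULL_MM):
    """Worst-case end-of-arm displacement for a given encoder tolerance.

    A small angular tolerance at the motor maps to an arc at the end of
    the arm; at full extension the lever arm is 350 mm.
    """
    angle_deg = counts * DEG_PER_COUNT
    return radius_mm * math.tan(math.radians(angle_deg))

err = end_error_mm(3)    # approximately 12.8 mm, matching the text
```

The same function also shows why tightening the tolerance to 1 count would still leave roughly 4 mm of positional slack before backlash is considered.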

Another point mentioned during the analysis of the results is that some small deviations recorded by the Kinect are not reproduced by the robot, as seen in Figure 10. This again is likely due to the tolerances applied: a deviation may be small enough that it falls within the tolerance of the previous point, so no further movement is carried out. This is, however, somewhat beneficial to the accuracy of the system, since it is fairly likely that such deviations are the result of an occasional inaccurate Kinect depth calculation. In a more accurate system, these small deviations would result in a jerky motion, which is generally undesirable in a physiotherapeutic exercise.

Finally, the fairly crude method used to draw the robot's motion, attaching a pen to the end of the arm with Sellotape, undoubtedly contributed to some of the system's inaccuracies.

5.1 Future Work

There are a large number of ways that the system developed in this
project could be improved. The Kinect is a powerful piece of
hardware, and only a small proportion of its capabilities have been
utilised in this system. Its ability to control a mechatronic system via
LabVIEW has been demonstrated by this project. Since LabVIEW is
used to control many other systems within the engineering
department in Leeds, it is hoped that the Kinect camera could be
considered as a viable input for even more advanced systems than
the robot used in this project.

5.1.1 Performance Enhancements for Existing System

At the beginning of the project, installing a fully functional version of the Kinesthesia toolkit proved problematic, and time could be spent on improving the installation. One possibility is to include the .NET configuration file in the installer, eliminating the need for the user to create one themselves.

The code in the programs developed for the project could be simplified and made more efficient, and several now-redundant components could be removed. When operating with a relatively small number of points, improvements to the code are unlikely to result in a much greater program speed; however, it is certainly good practice to simplify the code as much as possible, especially if it is to be used in more advanced applications. The code is well documented, but the descriptions of each section could still be improved further, to assist other users analysing the code in the future. The full process from recording a motion to reproducing it on the robot could also be simplified and the number of steps reduced. For example, the first stage, using the motion capture program, and the second stage, using the Kinect-to-robot coordinate conversion program, could be combined into one program. Having to manually copy the coordinate files from the PC's memory to the CompactRIO could be avoided through the use of an FTP reference within the LabVIEW code. This is a suitable improvement when the robot's CompactRIO and the PC running the motion capture program are both on the same network. This solution would not work, however, when the CompactRIO is operating remotely from the Kinect's PC, as would be the case with remotely administered physiotherapy (or tele-physiotherapy). In that case, a small downloadable file that could be quickly transferred via the internet from a physiotherapist to their patient is a simple way to administer a new motion.
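A possible shape for the transfer step, assuming the CompactRIO exposes a standard FTP server (the host name, credentials, and `motion.csv` file name are placeholders): the coordinate file is written locally and then pushed with Python's `ftplib`. In the actual system this role would fall to LabVIEW's FTP capability; the sketch only illustrates the idea.

```python
import csv
from ftplib import FTP

def write_motion_csv(points, path):
    """Serialise (x, y) robot coordinates to the CSV file the robot reads."""
    with open(path, "w", newline="") as f:
        csv.writer(f).writerows(points)

def push_to_rio(path, rio_host, user="admin", password=""):
    """Upload the coordinate file to the CompactRIO's FTP server.

    Host name, credentials and remote file name are placeholders for
    this sketch; a real cRIO's address depends on the network setup.
    """
    with FTP(rio_host) as ftp:
        ftp.login(user=user, passwd=password)
        with open(path, "rb") as f:
            ftp.storbinary("STOR motion.csv", f)
```

For the tele-physiotherapy case, `write_motion_csv` alone produces the small downloadable file described above.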

If more time were available, the validation drawings could have been improved. A more secure method of fixing the pen would help, as would fixing the drawing paper on a stand, minimising sources of error. The drawings could then be measured with just a ruler and compared to the Kinect recording. However, since further development of the Kinect-to-robot interface would likely use a more advanced robotic arm, the usefulness of such measurements is debatable. As mentioned in the discussion, reducing the backlash within the DC motors would increase the accuracy of the system; fitting new DC motors with a different gearbox would be a fairly easy modification.


5.1.2 Development of Advanced Applications

With the vision of using the type of system designed in this project in
a physiotherapeutic application, there are several interesting ways to
initiate further development.

On top of the possible improvements to the motion capture to replication process mentioned earlier, the front panel of the

[…]

programming techniques, i.e. idiot-proofing, could be used. Motions that are unlikely to be replicable by the robot could be

[…]

mentioned in section 3.3, where the number of coordinates passed from the Kinect to the robot can be reduced, a physiotherapist could fine-tune the motion they record, with the altered motion being updated on a graph in real time.
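The coordinate-reduction step itself could, as a sketch, be as simple as keeping every Nth recorded point; the function name and the endpoint-preserving rule here are illustrative, with `keep_every` being the knob a physiotherapist would tune:

```python
def thin_points(points, keep_every=3):
    """Reduce a recorded motion to every Nth coordinate.

    The first and last points are always kept, so the motion's endpoints
    are preserved even when the interior is thinned out.
    """
    if len(points) <= 2:
        return list(points)
    thinned = list(points[:-1:keep_every])
    if thinned[-1] != points[-1]:
        thinned.append(points[-1])
    return thinned
```

Re-plotting `thin_points(recorded, n)` for different `n` would give the real-time graph update described above.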

Another interesting concept is the idea of tele-physiotherapy. A physiotherapist could send their patient a new motion to practise via the internet. By taking advantage of the Kinect's RGB camera and microphone, the patient could watch their therapist performing the motion in their office, synchronised with the robot going through the motion at the patient's home. At the same time, sensors such as potentiometers could provide the therapist with live feedback on how well the patient is performing with the robotic arm. This feedback could also be combined with that from another Kinect installed at the patient's home, which could provide further feedback by analysing the patient's movement. Resistive feedback could also be implemented: here the patient moves the robot through a motion as part of a rehabilitative exercise, and the motors of the robot only switch on if the patient starts to deviate from the path of the motion.
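A minimal sketch of that resistive-feedback rule, for one axis at a time (e.g. one motor angle); the dead band and proportional gain are illustrative values, not taken from the project:

```python
def corrective_command(actual, target, dead_band, gain=0.5):
    """Resistive-feedback sketch: motors stay off inside the dead band.

    `actual` and `target` are positions along one axis. Outside the
    dead band, a simple proportional command nudges the patient back
    towards the path of the motion.
    """
    error = target - actual
    if abs(error) <= dead_band:
        return 0.0          # patient is on the path; no assistance
    return gain * error     # assist proportionally to the deviation
```

The zero-command region is what lets the patient do the work; the motors only engage once the deviation exceeds the tolerance.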


Figure 11. Basis of a Kinect-controlled tele-physiotherapy system (Du, Zhang, Mai, & Li, 2012)

The Kinect's powerful capabilities, including of course the ability to track the full human skeleton, have been underutilised in this project. Physiotherapeutic exercises could be developed not only for upper limbs but also, for example, for legs. As noted by Biswas and Basu (2011), the Kinect is capable of understanding hand gestures. These could be used as further controls in exercises, along with the possibility of implementing voice control.

Investigating a robot with additional degrees of freedom would also be of interest: with a more advanced robot, the y position of a user's hand from the Kinect could be used alongside the x and z values. Within the engineering department in Leeds, a more advanced robotic arm designed with a physiotherapy application in mind is currently being developed as a 4th-year group project. This has a forearm that can rotate through 360°, greatly increasing the robot's available workspace. Its dimensions also appear to be closer to those of a human arm than the robot used in this project.


Appendix 1
Robotic Arm Wiring Diagram


Appendix 2

Firstly, angle m1 is found: the angle from the vertical starting position that the forearm must move through so that the distance from the base (0, 0) to the end of the arm equals a, the distance between (0, 0) and the target (x, y). If x is negative for the first point, m1 is made negative (multiplied by −1) for all points in the motion:

a = √(x² + y²)
m1 = 2 cos⁻¹(a / 2L)

where L = 175 mm (length of one arm link).

Angle m2 is the angle that the upper arm must move through to reach the target point.

If m1 is positive:
m2 = tan⁻¹(x / y) − m1 / 2

If m1 is negative:
m2 = tan⁻¹(x / y) + |m1| / 2

Biswas, K. K., & Basu, S. K. (2011). Gesture recognition using
Microsoft Kinect. Automation, Robotics and Applications
(ICARA), 2011 5th International Conference on.
Coe, R., Hussain, N., Oladokun, A., Stokes, W., Skyes, J., & Yang,
S. (2012). Robotic Arm Summer Project. University of Leeds.
Cotter, B., Clark, D., & Norman, C. (2012a). Using a Low-Cost
Motion Capture System Based for New Surgical and
Rehabilitation Technologies. University of Leeds.
Cotter, B., Clark, D., & Norman, C. (2012b). Community: Kinesthesia
- A Kinect Based Rehabilitation and Surgical Analysis System,
UK. Retrieved January 6, 2013, from
Du, G., Zhang, P., Mai, J., & Li, Z. (2012). Markerless Kinect-based
hand tracking for robot teleoperation. International Journal of
Advanced Robotic Systems, 9. Retrieved from
Henry, P., Krainin, M., Herbst, E., Ren, X., & Fox, D. (2012). RGB-D
mapping: Using Kinect-style depth cameras for dense 3D
modeling of indoor environments. The International Journal of
Robotics Research, 31(5), 647–663.
Jackson, A. E., Makower, S. G., Culmer, P. R., Holt, R. J., Cozens, J.
A., Levesley, M. C., & Bhakta, B. B. (2009). Acceptability of
robot assisted active arm exercise as part of rehabilitation after
stroke. Rehabilitation Robotics, 2009. ICORR 2009. IEEE
International Conference on. doi:10.1109/ICORR.2009.5209549
Khoshelham, K. (2011). Accuracy Analysis of Kinect Depth Data.
ISPRS Journal of Photogrammetry and Remote Sensing, 38,
135. Retrieved from
Kinect for Windows. (2012). Retrieved December 27, 2012, from


Kinect for Windows Sensor Specifications. (2012). Retrieved

December 27, 2012, from
Kwakkel, G., Kollen, B. J., & Krebs, H. I. (2008). Effects of Robot-Assisted Therapy on Upper Limb Recovery After Stroke: A
Systematic Review. Neurorehabilitation and Neural Repair,
22(2), 111–121. doi:10.1177/1545968307305457
Lu, E. C. (2011). Development of an Upper Limb Robotic Device for
Stroke Rehabilitation. ProQuest Dissertations and Theses.
University of Toronto (Canada), Canada. Retrieved from
Mentiplay, B., & Clark, R. (2013). Reliability and validity of the
Microsoft Kinect for evaluating static foot posture. Journal of
Foot and Ankle Research. Retrieved from
Microsoft Online Store. (2013). Retrieved January 5, 2013, from