
PHYSICAL SIMULATION OF IMAGE-DIRECTED RADIATION THERAPY OF LUNG TARGETS

by

KAPIL SHARMA

Submitted in partial fulfillment of the requirements for the degree of Master of Science

Thesis Advisor: Wyatt Newman

Department of Electrical Engineering and Applied Physics
CASE WESTERN RESERVE UNIVERSITY
August 1999

Table of Contents
Chapter 1: Introduction
  1.1 Overview of Cancer
  1.2 Radiotherapy
  1.3 Stereotactic Radiosurgery and Radiotherapy
  1.4 Image-Directed Radiation Therapy
  1.5 IDRT of Lung Tumors
  1.6 Goal and Organization of this Thesis
Chapter 2: Tool-Frame Calibration
  2.1 Relation Between Tool Frame and World Frame
  2.2 Solving for P7/6
  2.3 Results
  2.4 Conclusions
Chapter 3: Robot-Camera Calibration
  3.1 Camera Models
    3.1.1 A Distortion-Free Camera Model
    3.1.2 Lens Distortion Model
  3.2 RAC-Based Camera Calibration
  3.3 Computation of 3-D Coordinates from a Calibrated Camera
  3.4 Automated Robot/Camera Calibration
    3.4.1 Generation of 3-D Points in World Frame for Calibration
    3.4.2 Generation of 2-D Image Coordinates
    3.4.3 Calibration Computation
    3.4.4 Calibration User-Interface Software
  3.5 Results
Chapter 4: Treatment Simulation: Physical Components
  4.1 Phantom and Film
  4.2 Emulation of Target Motion
  4.3 Proxy for Target Location
Chapter 5: Treatment Simulation: Software Components
  5.1 Hardware Platform
  5.2 Real-Time Computation of Target Coordinates
  5.3 Graphical Interface
    5.3.1 Display
    5.3.2 Controls
    5.3.3 Menu
  5.4 Beam Control
  5.5 Node Generation
  5.6 Summary of Treatment Simulation Protocol
  5.7 Beam Size Selection
Chapter 6: Results and Conclusions
  6.1 Results
  6.2 Conclusions
  6.3 Future Work
Appendix 1
Bibliography

List of Tables
Table 2.1: Identification of P7/6 in the Presence of Noise
Table 2.2: Computed Coordinates at a Test Reference Point 1 from 3 Approaches Using Identified P7/6
Table 2.3: Computed Coordinates at a Second Test Reference Point from 3 Approaches Using the Same Identified P7/6
Table 3.1: Accuracy Test 1
Table 3.2: Accuracy Test 2
Table 5.1: Statistical Results for First Set of Values for Tumor Size and Distance Threshold
Table 5.2: Statistical Results for Second Set of Values for Tumor Size and Distance Threshold
Table 5.3: Resulting Values for Non-Target Coverage and Beam Time Utilization with Change in Distance Threshold

List of Figures
Figure 1.1: Probability of Tumor Tissue and Normal Tissue Morbidity versus Dose
Figure 1.2: Linear Accelerator
Figure 1.3: Cleveland Clinic Cancer Center Cyberknife Treatment System
Figure 1.4: Male Cancer Risks
Figure 1.5: Female Cancer Risks
Figure 1.6: Translational Motion of Lung Tumor during Respiration
Figure 2.1: Tool Used for Calibration
Figure 2.2: Coordinate Frames Defined on the Robot
Figure 2.3: Robot's Tool Tip Touches a Reference Point
Figure 3.1: Camera Coordinate System Assignment
Figure 3.2: Effects of Radial Distortion
Figure 3.3: Effects of Tangential Distortion
Figure 3.4: Communication between Different Hardware Components
Figure 3.5: Calibration Software Interface
Figure 4.1: X-Y Table and Phantom under Treatment Beam Source
Figure 4.2: Parabolic Velocity Curve for PVT Moves
Figure 4.3: Describing a Contour in Segments of PVT Moves
Figure 4.4: Generated Trajectory, Position vs. Time
Figure 4.5: Surface-Mounted LED Used as Proxy
Figure 5.1: Treatment Software Interface
Figure 5.2: Node Generation Software Interface
Figure 5.3: Block Diagram Description of Treatment Simulation
Figure 5.4: Coverage Area Histogram
Figure 6.1: Exposed Film with No Gating
Figure 6.2a: Exposed Film for Manual Gating, 1-D Target Motion
Figure 6.2b: Isodose Lines, Manual Gating, 1-D Motion
Figure 6.2c: Dose Area Histogram, Manual Gating, 1-D Motion
Figure 6.3a: Exposed Film for Automated Gating, 1-D Motion
Figure 6.3b: Isodose Lines, Automated Gating, 1-D Motion
Figure 6.3c: Dose Area Histogram, Automated Gating, 1-D Motion
Figure 6.4a: Exposed Film for Manual Gating, 2-D Motion
Figure 6.4b: Isodose Lines, Manual Gating, 2-D Motion
Figure 6.4c: Dose Area Histogram, Manual Gating, 2-D Motion
Figure 6.5a: Exposed Film for Automated Gating, 2-D Motion
Figure 6.5b: Isodose Lines, Automated Gating, 2-D Motion
Figure 6.5c: Dose Area Histogram, Automated Gating, 2-D Motion
Figure 6.6a: 9 Stacked Films for 3-D Tumor, 2-D Motion, Automated Gating from 5 Beam Approaches
Figure 6.6b: Isodose Lines for 9 Stacked Films (1 to 9), 2-D Motion, Automated Gating from 5 Beam Approaches
Figure 6.6c: Dose Volume Histogram for Stacked Films, 2-D Motion, Automated Gating from 5 Beam Approaches
Figure 6.7a: Exposed Film for Traditional Non-Gated Treatment, 2-D Motion and 1 Approach Direction
Figure 6.7b: Isodose Lines for Traditional Treatment Example
Figure 6.7c: Dose Area Histogram for the Traditional Treatment Example

Acknowledgments
This work was supported and motivated by the Cleveland Clinic Foundation, Department of Radiation Oncology. The above support is gratefully acknowledged. I would like to thank my advisor, Dr. Wyatt Newman, for his ideas and technical guidance. I would like to thank the rest of my committee: Dr. Martin Weinhous and Dr. Michael Branicky. I also appreciate those who helped me at the Clinic, specifically Dr. Roger Macklis, Mr. Greg Glosser, Dr. Ray Rodebaugh and Dr. Qin Sheng Chen. I would like to extend my deepest appreciation to my family: without their love and support, none of this would have been possible.

Physical Simulation of Image-Directed Radiation Therapy of Lung Targets

Abstract

by KAPIL SHARMA

Traditional radiation therapy systems operate in an open-loop fashion, with no real-time feedback on patient or target position. They are often constrained by the volume of normal tissue that must be irradiated when treating a moving target, such as a lung tumor moving with respiration. In this study, a novel means of cancer treatment, image-directed radiation therapy (IDRT), has been explored experimentally. This treatment method offers the potential for more highly targeted radiation dose delivery to tumors, reducing the collateral damage to surrounding, healthy tissue. It is shown that smaller, more conformal fields, irradiating only when the target is within the portal (known as gating), can provide an increased therapeutic ratio.



1. INTRODUCTION

At the Cleveland Clinic, a novel means of cancer treatment, image-directed radiation therapy (IDRT), is being explored experimentally. This treatment method offers the potential for more highly targeted radiation dose delivery to tumors, reducing the collateral damage to surrounding, healthy tissue. This thesis presents the motivation for IDRT, identifies the challenges in accomplishing IDRT, and presents simulated and experimental results for evaluating the potential benefits of IDRT.

1.1 Overview of Cancer


Cancer is a group of diseases characterized by uncontrolled growth and spread of abnormal cells. When cell division gets out of control, the cells continue to divide, developing into a mass called a tumor. If a tumor is left untreated, it may invade and destroy surrounding tissue, leading to the formation of new tumors in new locations, a process referred to as metastasis. The National Cancer Institute estimates that approximately 8.2 million Americans alive today have a history of cancer [1]. About 1,221,800 new cancer cases are expected to be diagnosed in 1999 [1], and since 1990 approximately 12 million new cancer cases have been diagnosed. Lifetime risk refers to the probability that an individual, over the course of a lifetime, will develop cancer or die from it. In the US, men have a 1 in 2 lifetime risk of developing cancer; for women the risk is 1 in 3.

Treatment choices for a person with cancer depend on the type and stage of the tumor, that is, whether it has spread and how far. Treatment options may include surgery, radiation, chemotherapy, hormone therapy, and immunotherapy. Often several forms of treatment are combined to increase efficacy; for example, surgery can be followed by chemotherapy or radiation therapy to ensure the elimination of cancerous cells. It requires experience to determine the appropriate form of treatment from the available choices.

Surgery is the oldest form of treatment for cancer and remains one of the most important treatment components for solid tumors. Before the discovery of anesthesia and antisepsis (methods such as sterilization of instruments to prevent infection), surgery was performed with great discomfort and risk to the patient. Today surgery offers the greatest chance of cure for many types of cancer; about 60% of people with cancer will have some type of surgery [2]. The aim of surgery is to remove malignant growth as completely and rapidly as possible. Surgery alone can be curative in patients with localized disease, but because many patients (~70%) have evidence of micro-metastases at diagnosis, combining surgery with other treatment modalities is usually necessary to achieve higher response rates [2]. Also, reducing the tumor mass in certain cancers can increase the effectiveness of subsequent radiation therapy or chemotherapy, both of which are most effective against small numbers of cancer cells.

Chemotherapy is one of the most recent cancer treatment methodologies: the use of medicines (drugs) to treat cancer. Systemic

chemotherapy uses anticancer (cytotoxic) drugs that are usually given intravenously or orally. These drugs enter the bloodstream and reach all areas of the body, making this treatment potentially useful for cancer that has spread. It can include one drug or several drugs, taken from a choice of different available drugs. Chemotherapy drugs work by interfering with the ability of a cancer cell to divide and reproduce itself; the affected cells become damaged and eventually die. As the drugs are carried in the blood, they can reach cancer cells all over the body. Unfortunately, chemotherapy drugs can also affect normal cells, sometimes causing side effects ranging from unpleasant to toxic. Chemotherapy is particularly valuable as the primary form of treatment for cancers that do not form a solid mass, like leukemia and lymphoma.

Radiation therapy is one of the major treatment modalities for cancer. Approximately 60% of all people with cancer will be treated with radiation therapy sometime during the course of their disease [2]. With advances in radiobiology and equipment technology, radiation therapy can now be delivered with maximum therapeutic benefit, minimizing toxicity and sparing healthy tissues. In addition to its therapeutic benefits, radiotherapy is a non-invasive or minimally invasive procedure. Radiotherapy, or radiation therapy, is the treatment of cancer and other diseases with ionizing radiation. Ionizing radiation deposits energy that injures or destroys cells in the area being treated (the target tissue) by damaging their DNA structure, making it impossible for these cells to continue to grow (mitotic death). Although normal cells can also be affected by ionizing radiation, they are usually

better able to repair their DNA damage. Radiation therapy may be used to treat localized solid tumors, such as cancers of the skin, brain, breast and lung. It can also be used to treat leukemia and lymphoma.

1.2 Radiotherapy
A novel approach to radiation therapy, image-directed radiation therapy, is the focus of this thesis. Soon after the discovery of X-rays by Roentgen in 1895, radiation's dramatic effects on normal tissues were discovered [3]. The higher the energy of the X-rays, the deeper the X-rays can penetrate into target tissue. Linear accelerators are machines that produce X-rays of increasingly greater energy; the use of these machines to focus radiation (such as X-rays) on a cancer site is called external beam radiotherapy. Gamma rays are another form of photon used in radiotherapy. Gamma rays are produced spontaneously as certain elements (such as radium, uranium and cobalt-60) release radiation as they decay. X-rays and gamma rays have the same effect on cancer cells.

Another technique for delivering radiation to cancer cells is to place radioactive implants directly on or in a tumor or body cavity. This is called internal radiotherapy. Brachytherapy, interstitial irradiation, and intracavitary irradiation are the types of internal radiotherapy [2]. In this treatment, the radiation dose is concentrated in a small area, and the patient usually stays in the hospital for a few days.

Internal radiotherapy is frequently used for cancers of the tongue, uterus, cervix, prostate and others. An investigational approach is particle beam radiation therapy, in which fast-moving subatomic particles (such as neutrons, pions and heavy ions) are used instead of photons.

Figure 1.1: Probability of Tumor Tissue and Normal Tissue Morbidity versus Dose (reprinted from [4])

Radiation's effect on individual cells is a probabilistic process [4]; however, the effect of radiation on a large set of cells is more deterministic. As shown in figure 1.1, there is a minimum dose threshold to achieve a clinical effect and a maximum dose above which all cells will demonstrate the effect. The primary aim of radiotherapy is to deliver a dose high enough to maximize the probability of tumor control

while keeping the risk to normal tissue below an intolerable level. In certain areas, the radiosensitivity of surrounding normal tissue becomes the dominant factor (e.g., the optic chiasm for brain tumors, the spine for lung tumors), limiting the maximum dose that can be delivered. Some tissues, such as in the lung, have a low dose threshold for permanent radiation effects: doses as low as 25 Gray (joule/kg) can lead to permanent damage, resulting in the loss of lung functionality.

Figure 1.2: Linear Accelerator

In traditional radiotherapy, a medical linear accelerator (figure 1.2) is used to deliver a dose to target tissue from one or more angles, typically 2-4. Fractionation (dividing the treatment over time into multiple smaller doses, or fractions, of radiation) is used to improve the radiation effect on the tumor while minimizing the effect on normal cells. The rationale behind fractionation is that

normal tissue tolerates small, daily doses of radiation relatively well, whereas the tumor does not, resulting in control of the tumor.

1.3 Stereotactic Radiotherapy and Radiosurgery


Stereotactic technology has been applied to neurosurgery since the early nineties [5, 6]. Recently, it has been applied to radiation treatment of tumors, particularly brain tumors [7, 8]. Stereotactic radiotherapy involves varying the angle of a radiation treatment beam in 3-D, together with varying beam intensities, to achieve very precise delivery of radiation to target tissue. Radiation beams are aimed at a focal point, and the dose distributions achieved by these techniques assure large doses to the target volume and much lower doses to the surrounding normal tissues. Most of the time spent during the procedure is in precisely planning the delivery of radiation beams to focus on the tumor and minimize damage to surrounding, normal tissue; this is known as conformal treatment planning. Stereotactic radiotherapy is primarily used for treatment of brain tumors. A head frame is attached to the patient's skull; with the assistance of a CT or MRI scanner providing a three-dimensional image, the frame helps pinpoint the tumor location without opening the skull. Further, stereotactic radiosurgery is typically given as a single treatment (single fraction), whereas stereotactic radiotherapy is given as a course of treatments (multiple fractions). The Cleveland Clinic has four kinds of external beam treatment systems: standard medical linear accelerators, the Leksell Gamma Knife [9], a Peacock intensity-modulated

radiation therapy system [10], and a Cyberknife image-directed therapy system [11, 12]. The Gamma Knife provides non-fractionated stereotactic radiosurgery; the others are capable of both stereotactic radiosurgery and fractionated stereotactic radiotherapy. The Gamma Knife functions by delivering beams from 201 Cobalt-60 sources to a focal point. The standard medical accelerators deliver radiation using beam arcs. The Peacock uses a fan beam with intensity-modulated X-rays within the fan to achieve a conformal dose distribution. The Cyberknife delivers radiation from a miniature accelerator mounted on a robotic manipulator under real-time image-directed computer control to provide a conformal dose distribution.

1.4 Image-Directed Radiation Therapy (IDRT)


Interactive image-guided surgery has been used in the field of neurosurgery [13], but its use in the field of radiation treatment is very new [12, 14]. Conventional stereotactic radiation therapy involves use of a frame rigidly attached to the patient's skull to provide a reference for both targeting and treatment. The idea is that after positioning the patient with the help of the frame, if a beam is constrained to pass through a particular point in the frame coordinate system, it will also pass through the intended target within the patient. But this assumes that the patient does not move after alignment is done. It is an open-loop treatment system in the sense that once the alignment is done, there is no adjustment for subsequent motion of the patient or tumor. This assumption is reasonable for targets within the skull when a frame is bolted to the skull and also rigidly fixtured to ground. Image-directed radiation therapy uses

real-time images of the target (or fiducial markers in or around the target) in place of the frame to alter the aim of the radiation source so that the intended target is always in the beam's path, hence providing a closed-loop system. Currently the Cyberknife (see figure 1.3) is the only radiation treatment system using this technology. It uses a pair of orthogonal, ceiling-mounted, diagnostic-quality X-ray sources to provide near real-time feedback of patient position. The treatment source is a miniature X-band linear accelerator manipulated by a six degree-of-freedom Fanuc robot. The system has a set of predefined treatment nodes, or directions, from which a portion of a treatment can be delivered; selection of particular nodes and the dose delivered from each node is done by computerized treatment planning. During treatment, the robot sequentially moves the accelerator to each of the selected nodes, waits while the real-time diagnostic imager acquires a pair of target/anatomy images, and compares and registers the diagnostic images with reconstructed synthetic images from previously acquired CT data. This comparison enables the system to determine whether any patient motion has occurred; if so, the robot moves the accelerator to correct for that motion. As long as the patient motion is less than 1 centimeter, the system will automatically correct for the motion.


Figure 1.3: Cleveland Clinic Cancer Center Cyberknife Treatment System

1.5 IDRT of Lung Tumors


Lung cancer is the most common cancer-related cause of death among men and women, and the most commonly occurring cancer among both (figures 1.4 and 1.5). An estimated 171,600 new lung tumor cases will occur in 1999 [15], accounting for 14% of cancer diagnoses, and an estimated 158,900 deaths due to lung cancer will occur in 1999, accounting for 28% of all cancer deaths [15].


Figure 1.4: Male Cancer Risks [15]

Figure 1.5: Female Cancer Risks [15]

One of the difficulties of radiation treatment of lung tumors is that, of all tumors, lung tumors demonstrate the greatest motion and deformation, due to both breathing and heartbeat (figure 1.6). During treatment, however, there is no adjustment for this motion in real time. Instead, a wider treatment beam is used to conservatively guarantee that the target remains inside the beam [17].


Figure 1.6: Translational Motion of Lung Tumor during Respiration [16]

Tumor identification is done using computerized tomography. Physicians draw outlines of tumors and critical structures using these images. Using a prescribed minimum dose to the tumor and maximal dose to critical structures, a dosimetrist uses a computer treatment planning system to calculate the optimal treatment. At present, the area of the beam is made larger than the tumor area to ensure coverage of all cancerous tissue and to account for motion; this margin is usually ~2 cm [17]. Finally, the length of time that the beam is on is at least several seconds, which is longer than the breathing cycle. Conventional treatment planning and delivery cannot fully account for the fundamental inaccuracy of using static images and no feedback to treat a moving tumor. This provides the motivation for the use of image-directed radiation therapy to provide a closed-loop treatment system, adjusting the beam with tumor motion.


The Cyberknife is currently being used for treatment of tumor sites within the skull and near the spinal cord. The imaging system in the Cyberknife keys on rigid skull features to perform image correlation with CT images. One of the greatest

advantages of the Cyberknife system is that it has six degrees of freedom. This flexibility could allow the system to be used for the treatment of extracranial tumor sites. But the system, in its present form, cannot be used for treatment of extracranial tumors, specifically lung tumors, due to the following constraints.

1. The image quality of the X-ray images is poor, so the system can only use rigid structures or bones to perform image correlation. In the case of lung tumors this is particularly problematic, as the number of obstructions and occlusions in the torso makes automatic detection of tumors nearly impossible in real time.

2. Assuming tumors could be identified within the images, the current image correlation takes around 6 seconds, which would render the system useless for treatment of moving lung tumors. A typical tumor will have a motion period of 1-3 seconds, during which it can move anywhere from 0-2 centimeters.

3. The Cyberknife is a point-and-shoot system. It is not designed to track tumors.

4. In addition to the technical challenges of adapting the system for treatment of other tumor sites, there are legal and regulatory challenges. The Food and Drug Administration (FDA) must approve all experimental devices and treatments. While the Cyberknife is presently approved under an Investigational Device Exemption for treatment of intracranial tumors, treatment outside the skull requires additional FDA approval.

These constraints can be overcome to a certain extent with the use of image proxies and human interaction. A proxy is an indirect, external, visible marker that can be used to infer the position of a tumor inside the body. The proxy position can be determined with the help of calibrated video cameras. If a coordinate transform between a proxy and a tumor is known, the position of the tumor can be computed from the proxy location. Use of a proxy can thus avoid dependence on unreliable and poor-quality diagnostic imaging for computing the 3-dimensional tumor positions. Given a reliable transform between a proxy (or proxies) and a target tumor, it would be possible to identify tumor coordinates reliably and accurately using conventional video cameras. Assuming fast, accurate, and reliable identification of tumor coordinates, one could exploit control over treatment beam power or aim to achieve more precise radiation dose delivery. In this scenario, a physician would see a real-time display of tumor and beam coordinates on a screen and could gate or track the beam using a mouse, keypad or joystick. Here, gating means turning the beam on whenever the tumor is in position, as opposed to tracking, which means following the target with the beam turned on. Previous work in computer simulation has shown that, using real-time feedback of images, a trained physician can treat a tumor with increased dose while reducing the dose to healthy tissue [18].


1.6 Goal and Organization of this Thesis


The purpose of this study was to evaluate the feasibility of image-directed gated treatment of lung tumors using the Cyberknife. A treatment environment was simulated using both hardware and software. The experimental testbed consisted of the following main components:

- The Cyberknife system with a robotically manipulated linear accelerator.
- An experimental target and a means of measuring the results.
- Generation of motion emulating the trajectory of a lung tumor due to respiration.
- Choice of a proxy to imply the position of a tumor inside the phantom.
- A calibrated video camera.
- A means to compute real-time 3-D tumor coordinates from video images of moving proxies.
- A real-time graphical display of computed tumor and beam coordinates.
- A means to manually or automatically modulate (gate) the radiation beam.

This thesis is organized as follows: calibration of the robot's tool frame with respect to the robot's base frame is discussed in chapter 2. Chapter 3 discusses the camera calibration technique employed. Chapter 4 describes the physical components of the experimental testbed, and chapter 5 the software components. Finally, the results and conclusions are presented in chapter 6.


2. TOOL-FRAME CALIBRATION

Success of image-directed radiation therapy depends critically on accurate calibration between computed beam coordinates and computed target coordinates. Achieving this calibration requires identification of multiple coordinate transformations: robot joint angles to tool-flange position and orientation (with respect to the robot base frame); tool frame (e.g., radiation beam) coordinates to robot tool-flange coordinates; camera-frame coordinates to robot base-frame coordinates; and proxy coordinates to target coordinates. Identification of the first transform, robot joint angles to tool-flange position and orientation, has already been done by the robot's manufacturer. Identification of all the other transformations was a part of this thesis. In this respect, the first step was identification of the tool frame to tool-flange coordinate transformation. To reconcile treatment-beam coordinates with camera coordinates, an intermediate step was used, involving a tool that was easy to align with the beam and easily recognized by the camera. The tool was a modified calibration pointer, which fit precisely within a mount aligned collinear with the beam axis. The pointer was retrofitted with a light-emitting diode (LED) at its tip, which was easily recognized in camera scenes by simple thresholding. The mounted tool is shown in figure 2.1. Calibration was performed in two steps: first, the tool-frame transform (from robot flange coordinates to pointer tip) was identified using a fixed reference point; then the


camera was calibrated using the tool. This chapter describes the tool-frame calibration; chapter 3 presents the camera calibration.

2.1 Relation between Tool Frame and World Frame


The Cyberknife robot has a default tool frame defined on its tool flange. Whenever the robot is jogged in space, the 3-D coordinates corresponding to the robot's forward kinematics from base frame to tool-flange frame are computed and displayed. Figure 2.2 shows the world frame, the default tool-flange frame, and the new tool frame defined parallel to the tool-flange frame.


Figure 2.1: Tool Used for Calibration


Figure 2.2: Coordinate Frames Defined on the Robot

In figure 2.2, subscript 0 refers to the world frame, subscript 6 refers to the default tool-flange coordinate frame, and subscript 7 refers to the defined tool frame at the pointer tool tip. We can express the following relation among the different frames [19]:

$$P_{7/0} = P_{6/0} + R_{6/0} P_{7/6} \qquad (2.1)$$

where $P_{7/0}$ is the position of the origin (LED center) of tool frame 7 with respect to the world frame 0, $P_{6/0}$ is the position of the origin of the default tool-flange frame 6 with respect to the world frame 0, $P_{7/6}$ is the position of the origin of the tool frame 7 with respect to the default tool-flange frame 6, and $R_{6/0}$ is the rotation matrix of the default tool-flange frame 6 with respect to the world frame 0.

Let $w$ be the yaw angle, which is the angle of rotation between frame 6 and frame 0 about the x axis; $p$ be the pitch angle, which is the corresponding angle about the y axis; and $r$ be the roll angle, which is the corresponding angle about the z axis. Then the rotation matrix $R_{6/0}$ can be written as:

$$R_{6/0} = R^z_{6/0}\, R^y_{6/0}\, R^x_{6/0}$$

where the superscripts x, y, z denote the rotation matrices for yaw, pitch and roll respectively. Notice that the order of rotation is yaw, then pitch, then roll; the order of rotation is important because matrix multiplication is not commutative. Also note that the defined tool frame is parallel to the default tool-flange frame, so the w, p, r rotations for the defined tool frame are the same as for the default tool-flange frame. The matrices for yaw, pitch and roll can be written as [19]:

$$R^x_{6/0} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos w & -\sin w \\ 0 & \sin w & \cos w \end{bmatrix}, \qquad
R^y_{6/0} = \begin{bmatrix} \cos p & 0 & \sin p \\ 0 & 1 & 0 \\ -\sin p & 0 & \cos p \end{bmatrix}, \qquad
R^z_{6/0} = \begin{bmatrix} \cos r & -\sin r & 0 \\ \sin r & \cos r & 0 \\ 0 & 0 & 1 \end{bmatrix}$$

Rearranging equation 2.1 we have:

$$I P_{7/0} - R_{6/0} P_{7/6} = P_{6/0} \qquad (2.2)$$

Here $I$ is the $3 \times 3$ identity matrix. Equivalently, we can write equation 2.2 as

$$\begin{bmatrix} I & -R_{6/0} \end{bmatrix}_{3\times 6} \begin{bmatrix} P_{7/0} \\ P_{7/6} \end{bmatrix}_{6\times 1} = \begin{bmatrix} P_{6/0} \end{bmatrix}_{3\times 1} \qquad (2.3)$$

which is of the form

$$A_{3\times 6}\, X_{6\times 1} = B_{3\times 1}$$

Here we have 6 unknowns, given by the vector $X$, and only 3 equations, which cannot be solved to obtain a unique solution. We need at least three more equations to obtain a solution, which we obtain as follows.

2.2 Solving for $P_{7/6}$


A reference point is used for generating more equations to solve for the unknowns. The tool tip, i.e. the LED, is touched to the reference point from different directions (see figure 2.3).


Figure 2.3: Robot's Tool Tip Touches a Reference Point

Since the reference point is unchanged, $P_{7/0}$ is constant. If $n$ denotes the number of different directions from which the tool tip touches the reference point, we have the following equation:

$$\begin{bmatrix} I & -R^1_{6/0} \\ I & -R^2_{6/0} \\ \vdots & \vdots \\ I & -R^n_{6/0} \end{bmatrix}_{3n\times 6} \begin{bmatrix} P_{7/0} \\ P_{7/6} \end{bmatrix}_{6\times 1} = \begin{bmatrix} P^1_{6/0} \\ P^2_{6/0} \\ \vdots \\ P^n_{6/0} \end{bmatrix}_{3n\times 1} \qquad (2.4)$$

where the superscripts $1, 2, \ldots, n$ correspond to the approach angles of the $n$ measurements.


Note that $P^i_{6/0}$ and $R^i_{6/0}$ are known for each case $i$ from the robot controller's display of forward kinematics to the tool flange. For $n = 2$ we have 6 unknowns and 6 simultaneous equations, which can be easily solved to compute the solution. For $n > 2$ we can compute the least-squares solution using the following method. Equation 2.4 is equivalent to the form

$$A_{3n\times 6}\, X_{6\times 1} = B_{3n\times 1} \qquad (2.5)$$

where

$$A = \begin{bmatrix} I & -R^1_{6/0} \\ I & -R^2_{6/0} \\ \vdots & \vdots \\ I & -R^n_{6/0} \end{bmatrix}, \qquad
B = \begin{bmatrix} P^1_{6/0} \\ P^2_{6/0} \\ \vdots \\ P^n_{6/0} \end{bmatrix}_{3n\times 1}, \qquad
X = \begin{bmatrix} P_{7/0} \\ P_{7/6} \end{bmatrix}$$

Computing the pseudo-inverse as

$$A^+ = (A^T A)^{-1} A^T \qquad (2.6)$$

the least-squares solution follows as

$$\hat{X} = A^+ B \qquad (2.7)$$

Further, we can also compute the least-squares error as

$$\mathrm{Error} = \frac{1}{3n}\, [A\hat{X} - B]^T [A\hat{X} - B] \qquad (2.8)$$

The following are the steps used for tool-frame calibration (a code sketch of the computation follows this list):

- Use the default tool frame as the robot's tool frame.
- Jog the robot and touch the tool tip to a reference point from multiple different directions.
- Record the roll angle, yaw angle, pitch angle and tool position $P_{6/0}$ for each such pose.
- Compute the solution for the tool frame as per equations 2.6 and 2.7, solving for $P_{7/6}$ and $P_{7/0}$.
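Equations 2.4-2.8 amount to a stacked linear least-squares solve. The following Python/NumPy sketch illustrates the computation; the thesis does not include code, so the function and variable names here are illustrative assumptions only:

```python
import numpy as np

def rot_wpr(w, p, r):
    # R_6/0 = Rz(roll) @ Ry(pitch) @ Rx(yaw), per section 2.1.
    cw, sw = np.cos(w), np.sin(w)
    cp, sp = np.cos(p), np.sin(p)
    cr, sr = np.cos(r), np.sin(r)
    Rx = np.array([[1, 0, 0], [0, cw, -sw], [0, sw, cw]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cr, -sr, 0], [sr, cr, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def identify_tool_frame(poses):
    """Solve the stacked system of equation 2.4 by least squares.

    poses: list of (w, p, r, P_6/0) tuples recorded while the tool tip
    touches one fixed reference point from different directions.
    Returns (P_7/0, P_7/6, error), with error as in equation 2.8.
    """
    A = np.vstack([np.hstack([np.eye(3), -rot_wpr(w, p, r)])
                   for (w, p, r, _) in poses])
    B = np.concatenate([np.asarray(p60, dtype=float)
                        for (_, _, _, p60) in poses])
    X, *_ = np.linalg.lstsq(A, B, rcond=None)   # X = A+ B (eqs. 2.6, 2.7)
    resid = A @ X - B
    error = resid @ resid / (3 * len(poses))    # eq. 2.8
    return X[:3], X[3:], error
```

With $n$ poses the stacked system has $3n$ equations in 6 unknowns, so at least two poses are required; in the presence of measurement noise, more poses (15 were used in this work) improve the estimate.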

2.3 Results
To test the solution, a set of synthetic data was first generated. The data included values of $P_{7/0}$, $P_{6/0}$, $R_{6/0}$ and $P_{7/6}$ that solved equation 2.1. In the first experiment, no error was introduced, allowing for a perfect solution. In subsequent analyses, uniform random noise of 1 mm, 2 mm, 4 mm, 6 mm and 8 mm peak value was added to the values of $P_{6/0}$ and $R_{6/0}$. Fifteen different sets of $P_{6/0}$ and $R_{6/0}$ were generated, the equations were solved by the method described in section 2.2, and the resulting values of $P_{7/0}$ and $P_{7/6}$ were recorded. The results obtained are summarized in table 2.1:


Synthetic random error peak value | Computed X (mm) | Computed Y (mm) | Computed Z (mm) | Calculated error (mm)
1 mm | -830.309 | 0.529 | 109.254 | 0.499
2 mm | -830.136 | 0.855 | 108.962 | 0.999
4 mm | -829.790 | 1.505 | 108.378 | 1.998
6 mm | -829.443 | 2.15 | 107.795 | 2.998
8 mm | -829.0976 | 2.80758 | 107.211 | 3.99

Table 2.1: Identification of $P_{7/6}$ in the Presence of Noise. Actual $P_{7/6}$ = {-830.48, 0.204, 109.54}

The X, Y and Z coordinates in table 2.1 are the coordinates of $P_{7/6}$, and the error is computed by equation 2.8. For the actual tool-frame computation, the robot touched the reference point from 15 different directions; the resulting error, calculated by equation 2.8, was 2.1 mm. To test the accuracy of the tool-frame coordinate identification, the tool frame used by the robot's controller was changed to the computed tool frame, and the LED tip was touched to a reference point from different directions. The location of this reference point was different from the location used for calibration of the tool frame. The values of the X, Y and Z world coordinates were recorded from the robot's


teach pendant display. Tables 2.2 and 2.3 summarize the results for two different test point locations.

X (mm) | Y (mm) | Z (mm) | Euclidean distance from centroid (mm)
2185.00 | 654.654 | 89.417 | 3.01
2182.093 | 655.068 | 89.337 | 0.36
2179.344 | 654.582 | 88.598 | 2.8

Table 2.2: Computed Coordinates at a Test Reference Point 1 from 3 Approaches Using Identified $P_{7/6}$

X (mm) | Y (mm) | Z (mm) | Euclidean distance from centroid (mm)
2155.488 | 536.00 | 133.207 | 3.06
2157.594 | 538.065 | 133.716 | 0.64
2160.84 | 538.67 | 134.715 | 3.37

Table 2.3: Computed Coordinates at a Second Test Reference Point from 3 Approaches Using the Same Identified $P_{7/6}$


2.4 Conclusion
Use of the identified coordinate transform in the robot kinematic computations resulted in positioning errors in excess of 3 mm. For treatment, beam positioning accuracy should be better than 2 mm; however, such precision cannot be obtained through improved tool-frame identification alone. The sources of the error may include the robot mastering (joint-angle calibration), transmission wind-up or backlash, gravity droop, or other effects not included in a rigid-link kinematic model. Section 5.5 will discuss a method to further improve the precision by adding pre-computed offsets for each required pose.


3. ROBOT-CAMERA CALIBRATION

The most important step in our treatment testbed is obtaining the 3-dimensional coordinates of a proxy, which can later be used to compute the 3-dimensional location of a tumor. A video camera is used to obtain the positional information of the proxy in the robot's base frame. The first and foremost requirement in this process is robot-camera calibration: obtaining the transformation parameters between the camera's image frame and the robot's base frame. We first discuss different camera models, then present our calibration procedure, and conclude with our calibration results.

3.1 Camera Models


3.1.1 A Distortion-Free Camera Model

The purpose of a camera model is to relate the coordinates of a point in the camera's image frame to the coordinates of the corresponding point in space, expressed in a reference coordinate system. Let $\{X_w, Y_w, Z_w, O_w\}$ denote the world coordinate system, centered on the world-frame origin $O_w$; let $\{X_c, Y_c, Z_c, O_c\}$ denote the camera coordinate system, whose origin is at the optical center $O_c$ and whose $Z_c$ axis coincides with the optical axis; and let $\{X_i, Y_i, O_i\}$ denote the image coordinate system, centered at $O_i$ (the intersection of the optical axis $Z_c$ with the image plane, as illustrated in figure 3.1). The image-frame axes $\{X_i, Y_i\}$ lie in a plane parallel to the $X_c$ and $Y_c$ axes. Let $(x_w, y_w, z_w)$, $(x_c, y_c, z_c)$ and $(x_i, y_i)$ be the coordinates of a point in the world, camera and image frames respectively. The transformation of a point $P$ from world coordinates $p_w$ to camera coordinates $p_c$ is given by:

$$\begin{bmatrix} p_x \\ p_y \\ p_z \end{bmatrix}_c = R_{c/w} \begin{bmatrix} p_x \\ p_y \\ p_z \end{bmatrix}_w + o_{c/w}$$

or, for simplicity of notation,

$$\begin{bmatrix} x_c \\ y_c \\ z_c \end{bmatrix} = R \begin{bmatrix} x_w \\ y_w \\ z_w \end{bmatrix} + t \qquad (3.1)$$

where the rotation matrix $R$ and translation vector $t$ are written as:

$$R = \begin{bmatrix} r_1 & r_2 & r_3 \\ r_4 & r_5 & r_6 \\ r_7 & r_8 & r_9 \end{bmatrix}, \qquad t = \begin{bmatrix} t_x \\ t_y \\ t_z \end{bmatrix}$$


Figure 3.1: Camera Coordinate System Assignment

We invoke the standard distortion-free pin-hole model assumption that every real object point is connected to its corresponding image point through a straight line that passes through the focal point of the camera lens [23]. The following perspective equations result, relating the coordinates of a point $p$ expressed in the camera frame to its coordinates in the image plane:

$$u = f\, \frac{x}{z} \qquad (3.2)$$

$$v = f\, \frac{y}{z} \qquad (3.3)$$

In the above, $f$ is the (effective) focal length of the camera and $(u, v)$ are the analog coordinates of the object point in the image plane. The image coordinates $(x_i, y_i)$ are related to $(u, v)$ by the following equations:

$$x_i = s_u u \qquad (3.4)$$

$$y_i = s_v v \qquad (3.5)$$

The scale factors $s_u$ and $s_v$ not only account for TV scanning and timing effects, but also perform units conversion from the camera coordinates $(u, v)$, the units of which are meters, to the image coordinates $(x_i, y_i)$, measured in pixels. The camera calibration parameters are divided into extrinsic parameters (the elements of $R$ and $t$), which convey information about the camera position and orientation with respect to the world coordinate system, and intrinsic parameters (such as $s_u$, $s_v$, $f$ and the distortion coefficients that will be discussed later), which convey internal information about the camera components and about the interface of the camera to the vision system (frame grabber). Since there are only two independent parameters in the set of intrinsic parameters $s_u$, $s_v$ and $f$, it is convenient to define:

$$f_x = f s_u \qquad (3.6)$$

$$f_y = f s_v \qquad (3.7)$$

Combining the above equations with equation 3.1 yields the undistorted camera model that relates coordinates in the world frame $\{X_w, Y_w, Z_w\}$ to the image coordinate system $\{X_i, Y_i\}$:

$$x_i = f_x\, \frac{r_1 x_w + r_2 y_w + r_3 z_w + t_x}{r_7 x_w + r_8 y_w + r_9 z_w + t_z} \qquad (3.8)$$

$$y_i = f_y\, \frac{r_4 x_w + r_5 y_w + r_6 z_w + t_y}{r_7 x_w + r_8 y_w + r_9 z_w + t_z} \qquad (3.9)$$
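As a concrete illustration of the distortion-free model, the following short Python sketch (illustrative only; not from the thesis) maps a world-frame point to image coordinates via equations 3.1, 3.8 and 3.9:

```python
import numpy as np

def project_distortion_free(p_w, R, t, fx, fy):
    # Transform to the camera frame (eq. 3.1), then apply the
    # perspective projection with combined intrinsics (eqs. 3.8, 3.9).
    x_c, y_c, z_c = R @ np.asarray(p_w, dtype=float) + t
    return fx * x_c / z_c, fy * y_c / z_c   # (x_i, y_i)
```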

Note that the image (pixel) coordinates stored in the computer memory of the vision system are generally not equal to the image coordinates $(x_i, y_i)$ computed by equations 3.8 and 3.9. Let $(x_f, y_f)$ be the image (pixel) coordinates stored in the computer's memory for an arbitrary point, and let $(c_x, c_y)$ be the computed image coordinates of the center $O_i$ of the image plane. $(x_i, y_i)$ is then related to $(x_f, y_f)$ by the relation

$$x_i = x_f - c_x$$
$$y_i = y_f - c_y$$

The ideal values of $c_x$ and $c_y$ are the center of the pixel array, but in reality there is usually an uncertainty of about 10-20 pixels [25, 26].

3.1.2 Lens Distortion Model

Actual cameras and lenses include a variety of aberrations and thus do not obey the above ideal model. The main sources of error are:

a) The image spatial resolution defined by spatial digitization is relatively low, e.g., 512x480.
b) Lenses introduce distortion.
c) Camera assembly involves a considerable amount of internal misalignment; e.g., the center of the CCD sensing array may not be coincident with the optical principal point (the intersection of the optical axis with the image plane).
d) Hardware timing introduces mismatches between the image acquisition hardware and the camera scanning hardware.

As a result of several types of imperfections in the design and assembly of lenses, the distortion-free pinhole model may not be sufficiently accurate. Accuracy can be improved by models that take into account positional errors due to distortion:

$$\bar{u} = u + D_u(u, v) \qquad (3.10)$$
$$\bar{v} = v + D_v(u, v) \qquad (3.11)$$

where $\bar{u}$ and $\bar{v}$ are the unobservable distortion-free image coordinates, and $u$ and $v$ are the corresponding coordinates taking distortion into account.


Figure 3.2: Effects of Radial Distortion [22]

Figure 3.3: Effects of Tangential Distortion [22]


Two types of lens distortion are radial and tangential distortion, as shown in figures 3.2 and 3.3. Radial distortion causes an inward or outward displacement of a given image point from its ideal location; this type of distortion is mainly caused by flawed radial curvature of the lens elements. Camera calibration researchers have argued and experimentally verified that radial distortion is the dominant distortion effect [24]. We can approximate the radial component of distortion as:

$$D_u(u, v) = \kappa\, u (u^2 + v^2) + O[(u, v)^5] \qquad (3.12)$$
$$D_v(u, v) = \kappa\, v (u^2 + v^2) + O[(u, v)^5] \qquad (3.13)$$

The higher-order terms can, for all practical purposes, be dropped. Substituting the above into equations 3.10 and 3.11 yields

$$\bar{u} = u(1 + \kappa r^2), \qquad \bar{v} = v(1 + \kappa r^2)$$

where $r^2 = u^2 + v^2$. Because the undistorted image coordinates are unknown, it is desirable to express $r^2$ in terms of the measurable image coordinates; with $u = x_i / s_u$ and $v = y_i / s_v$,

$$r^2 = (x_i / s_u)^2 + (y_i / s_v)^2$$

Define the radial distortion coefficient $k$ as $k \triangleq \kappa / s_v^2$, and the ratio of scale factors $\mu$ as:

$$\mu \triangleq \frac{f_y}{f_x} = \frac{s_v}{s_u} \qquad (3.14)$$

Further, define

$$\tilde{r}^2 \triangleq \mu^2 x_i^2 + y_i^2 \qquad (3.15)$$

so that $\kappa r^2 = k \tilde{r}^2$. With the above substitutions, one obtains the following camera model that takes into account small radial-distortion effects:

$$x_i (1 + k\tilde{r}^2) = f_x\, \frac{r_1 x_w + r_2 y_w + r_3 z_w + t_x}{r_7 x_w + r_8 y_w + r_9 z_w + t_z} \qquad (3.16)$$

$$y_i (1 + k\tilde{r}^2) = f_y\, \frac{r_4 x_w + r_5 y_w + r_6 z_w + t_y}{r_7 x_w + r_8 y_w + r_9 z_w + t_z} \qquad (3.17)$$

Under the approximation that $k\tilde{r}^2 \ll 1$, we have

$$x_i \approx f_x (1 - k\tilde{r}^2)\, \frac{r_1 x_w + r_2 y_w + r_3 z_w + t_x}{r_7 x_w + r_8 y_w + r_9 z_w + t_z} \qquad (3.18)$$

$$y_i \approx f_y (1 - k\tilde{r}^2)\, \frac{r_4 x_w + r_5 y_w + r_6 z_w + t_y}{r_7 x_w + r_8 y_w + r_9 z_w + t_z} \qquad (3.19)$$

Note that among the parameters $f_x$, $f_y$ and $\mu$, only two are independent. The parameters to be calibrated in this case are the extrinsic parameters $R$ and $t$, and the intrinsic parameters $f_x$, $f_y$ and $k$. Whenever all the distortion effects other than radial lens distortion are zero, a radial alignment constraint (RAC) equation is maintained:

$$\frac{x_i}{y_i} = \mu^{-1}\, \frac{r_1 x_w + r_2 y_w + r_3 z_w + t_x}{r_4 x_w + r_5 y_w + r_6 z_w + t_y} \qquad (3.20)$$
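In the model of equations 3.16 and 3.17, the image coordinates appear on both sides (through $\tilde{r}^2$), so projecting a world point into distorted image coordinates requires an iterative solve. A minimal fixed-point sketch in Python, under the same illustrative naming assumptions as above:

```python
import numpy as np

def project_with_radial_distortion(p_w, R, t, fx, fy, k, mu, iters=10):
    x_c, y_c, z_c = R @ np.asarray(p_w, dtype=float) + t
    xu, yu = fx * x_c / z_c, fy * y_c / z_c    # undistorted projection (3.8, 3.9)
    xi, yi = xu, yu                            # initial guess
    for _ in range(iters):
        d = 1.0 + k * (mu**2 * xi**2 + yi**2)  # 1 + k*r~^2 (eq. 3.15)
        xi, yi = xu / d, yu / d                # enforce x_i*(1 + k*r~^2) = x_u
    return xi, yi
```

For the small $k$ typical of TV lenses, a few iterations suffice; a computation of this form (world coordinates in, image coordinates out) corresponds to accuracy test 1 reported in section 3.5.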


3.2 RAC-Based Camera Calibration


The camera calibration problem is to identify the set of extrinsic parameters (camera location and orientation in world coordinates) and intrinsic parameters (such as focal length, scale factors, distortion coefficients, etc.) of the camera, using a set of points known in both world coordinates and image coordinates. Camera calibration methods can be divided into two categories: iterative and non-iterative. The non-iterative methods provide a closed-form solution for the calibration parameters, and hence are faster [20, 21], but they have a fundamental inaccuracy due to neglecting the lens distortion effect. The iterative methods, which take lens distortion into account, usually proceed in two steps involving both iterative and non-iterative computations [23, 24, 27]. In this project we used an iterative calibration method known as the radial alignment constraint (RAC)-based camera calibration method, as proposed by Tsai [23, 24]. The mathematical details of the calibration procedure are described in appendix 1. It is initially assumed that the image center coordinates $(c_x, c_y)$ and the ratio of scale factors $\mu$ are known; methods for estimating $\mu$, $c_x$ and $c_y$ are described in references [25, 26]. The result of calibration is the estimated values of the intrinsic and extrinsic parameters.


3.3 Computation of 3-D Coordinates from a Calibrated Camera


a. USING ONE CAMERA AND A KNOWN Z WORLD COORDINATE

After performing camera calibration we have the intrinsic and extrinsic parameters of the camera, which can be used to compute the 3-D position of a point whose coordinates are known in the image plane and whose world coordinate $z_w$ is known. Rearranging equations 3.16 and 3.17 we obtain:

$$\left[\frac{x_i}{f_x}(1 + k\tilde{r}^2)\, r_7 - r_1\right] x_w + \left[\frac{x_i}{f_x}(1 + k\tilde{r}^2)\, r_8 - r_2\right] y_w + \left[\frac{x_i}{f_x}(1 + k\tilde{r}^2)\, r_9 - r_3\right] z_w = t_x - \frac{x_i}{f_x}(1 + k\tilde{r}^2)\, t_z$$

$$\left[\frac{y_i}{f_y}(1 + k\tilde{r}^2)\, r_7 - r_4\right] x_w + \left[\frac{y_i}{f_y}(1 + k\tilde{r}^2)\, r_8 - r_5\right] y_w + \left[\frac{y_i}{f_y}(1 + k\tilde{r}^2)\, r_9 - r_6\right] z_w = t_y - \frac{y_i}{f_y}(1 + k\tilde{r}^2)\, t_z$$

These are simultaneous equations of the type

$$a_{11} x_w + a_{12} y_w + a_{13} z_w = b_1$$
$$a_{21} x_w + a_{22} y_w + a_{23} z_w = b_2$$

Now, if we know the value of $z_w$, these simplify to two simultaneous equations in two unknowns, which can be easily solved to obtain $x_w$ and $y_w$ (a code sketch covering both this method and the stereo method appears at the end of this section).

b. USING STEREO VISION

Two calibrated cameras can be used to compute the complete 3-D coordinates of a point whose image coordinates are known in both camera frames. For two cameras we have the following equations:

$$\left[\frac{x_i}{f_x}(1 + k\tilde{r}^2)\, r_7 - r_1\right] x_w + \left[\frac{x_i}{f_x}(1 + k\tilde{r}^2)\, r_8 - r_2\right] y_w + \left[\frac{x_i}{f_x}(1 + k\tilde{r}^2)\, r_9 - r_3\right] z_w = t_x - \frac{x_i}{f_x}(1 + k\tilde{r}^2)\, t_z$$

$$\left[\frac{y_i}{f_y}(1 + k\tilde{r}^2)\, r_7 - r_4\right] x_w + \left[\frac{y_i}{f_y}(1 + k\tilde{r}^2)\, r_8 - r_5\right] y_w + \left[\frac{y_i}{f_y}(1 + k\tilde{r}^2)\, r_9 - r_6\right] z_w = t_y - \frac{y_i}{f_y}(1 + k\tilde{r}^2)\, t_z$$

$$\left[\frac{x_i'}{f_x'}(1 + k'\tilde{r}'^2)\, r_7' - r_1'\right] x_w + \left[\frac{x_i'}{f_x'}(1 + k'\tilde{r}'^2)\, r_8' - r_2'\right] y_w + \left[\frac{x_i'}{f_x'}(1 + k'\tilde{r}'^2)\, r_9' - r_3'\right] z_w = t_x' - \frac{x_i'}{f_x'}(1 + k'\tilde{r}'^2)\, t_z'$$

$$\left[\frac{y_i'}{f_y'}(1 + k'\tilde{r}'^2)\, r_7' - r_4'\right] x_w + \left[\frac{y_i'}{f_y'}(1 + k'\tilde{r}'^2)\, r_8' - r_5'\right] y_w + \left[\frac{y_i'}{f_y'}(1 + k'\tilde{r}'^2)\, r_9' - r_6'\right] z_w = t_y' - \frac{y_i'}{f_y'}(1 + k'\tilde{r}'^2)\, t_z'$$

where primed parameters refer to camera 2 and unprimed parameters to camera 1. These are simultaneous equations of the type

$$a_{11} x_w + a_{12} y_w + a_{13} z_w = b_1$$
$$a_{21} x_w + a_{22} y_w + a_{23} z_w = b_2$$
$$a_{31} x_w + a_{32} y_w + a_{33} z_w = b_3$$
$$a_{41} x_w + a_{42} y_w + a_{43} z_w = b_4$$

These are four simultaneous linear equations in three unknowns, which can be solved by the linear least-squares method, using the pseudo-inverse to compute the solution with least mean-square error. Note that in both methods we have assumed that the image coordinates are the same as the computer representation of the image coordinates; in reality they are related by

$$x_i = x_f - c_x$$
$$y_i = y_f - c_y$$

where $(x_f, y_f)$ is the computer representation of the image coordinates.
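Both methods of this section reduce to stacking the rearranged model equations and solving a small linear system. A Python sketch follows; it is a sketch only, and the parameter bundle (R, t, fx, fy, k, mu, cx, cy) and function names are assumptions of this sketch, not the thesis software:

```python
import numpy as np

def model_rows(cam, xf, yf):
    """Coefficient rows (A, b) of the rearranged equations for one camera.
    cam: dict with R (3x3), t (3,), fx, fy, k, mu, cx, cy."""
    R, t = cam['R'], cam['t']
    xi, yi = xf - cam['cx'], yf - cam['cy']              # pixel -> image coords
    d = 1.0 + cam['k'] * (cam['mu']**2 * xi**2 + yi**2)  # 1 + k*r~^2
    ax = (xi / cam['fx']) * d * R[2] - R[0]              # x-equation coefficients
    bx = t[0] - (xi / cam['fx']) * d * t[2]
    ay = (yi / cam['fy']) * d * R[2] - R[1]              # y-equation coefficients
    by = t[1] - (yi / cam['fy']) * d * t[2]
    return np.array([ax, ay]), np.array([bx, by])

def xy_from_one_camera(cam, xf, yf, zw):
    """Method (a): one camera plus a known z_w; solve the 2x2 system."""
    A, b = model_rows(cam, xf, yf)
    return np.linalg.solve(A[:, :2], b - A[:, 2] * zw)   # (x_w, y_w)

def point_from_stereo(cam1, xy1, cam2, xy2):
    """Method (b): stack both cameras' rows and solve 4x3 by least squares."""
    A1, b1 = model_rows(cam1, *xy1)
    A2, b2 = model_rows(cam2, *xy2)
    A, b = np.vstack([A1, A2]), np.concatenate([b1, b2])
    p, *_ = np.linalg.lstsq(A, b, rcond=None)            # pseudo-inverse solution
    return p                                             # (x_w, y_w, z_w)
```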


3.4 Automated Robot/Camera Calibration


So far we have discussed the mathematical aspects of camera calibration. Now we will discuss the actual method used to calibrate our camera against the robot's base frame. The base frame of the robot was used for camera calibration because ultimately we want to obtain the 3-D coordinates of points in the robot's base frame.

3.4.1 Generation of 3-D Points in World Frame for Calibration

The robot was used to generate random 3-D points for the calibration poses. Our tool with a light-emitting diode (LED) was used as the robot's end-effector (see figure 2.1), and the 3-D position of the LED was computed using the robot's kinematics (see chapter 2). For the generation of sample points, a program was used that generated random points within the camera view frame. These positions were recorded and stored in a file; while performing the calibration, the robot was sequenced through these positions automatically.

3.4.2 Generation of 2-D Image Coordinates

A live video stream from the video camera was captured. For each position of the robot, a snapshot of the illuminated LED was taken in a darkened room. Images were thresholded, leaving the LED as the only non-zero pixels. The centroids of the LED images were then computed, which served as our 2-D image coordinates. This step was performed by an automated calibration routine. The following standard algorithm was used for the centroid calculation [28]:

1. Threshold the image using a threshold T.
2. Compute the centroid of the white pixels using the formulas

$$\bar{x} = \frac{\sum_{i=1}^{N} x_i P_i}{\sum_{i=1}^{N} P_i}, \qquad \bar{y} = \frac{\sum_{i=1}^{N} y_i P_i}{\sum_{i=1}^{N} P_i}$$

where $N$ is the number of white pixels and $P_i$ is the pixel intensity value of the $i$-th pixel. Here, white pixels are the pixels with value greater than the threshold.
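A direct NumPy rendering of this centroid computation (illustrative only; function name is assumed) is:

```python
import numpy as np

def led_centroid(image, threshold):
    # White pixels: intensity above the threshold T.
    ys, xs = np.nonzero(image > threshold)
    P = image[ys, xs].astype(float)   # pixel intensities P_i
    return (xs * P).sum() / P.sum(), (ys * P).sum() / P.sum()  # (x, y)
```

The intensity weighting locates the LED center with sub-pixel resolution.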

3.4.3 Calibration Computation

After computing the corresponding 2-D image coordinates for each 3-D position of the LED, the algorithm discussed in section 3.2 was used for computing the calibration parameters. The calibration parameters were saved for the subsequent 3-D coordinate computations. The simplified algorithm is:

1) Read the next position from the file and command the robot to move to that position.
2) Capture a snapshot of the LED from the video camera.
3) Process the image to obtain the 2-D coordinates of the LED.
4) Store the 2-D image coordinates. If there are more positions, go to step 1; otherwise go to step 5.
5) Compute the calibration parameters using the recorded 2-D image coordinates and the corresponding stored 3-D world coordinates. Store the parameters in a file.

The calibration program also performed the synchronization of a user-interface workstation, which captured video from the video camera, and the robot-controller workstation, which controlled the robot's positioning. The two workstations communicated through TCP/IP sockets; the robot-controller workstation communicated with the robot's servo controller through a serial port.

Figure 3.4: Communication between Different Hardware Components

3.4.4 Calibration User-Interface Software

The automated calibration includes a graphical user interface, as shown in figure 3.5. The display portion of the interface shows the thresholded image from the live video stream (see figure 3.4). The Connect button sets up network communications with the client. The Gather Data button commands the robot to move successively to all the positions and invokes computation of the centroid position for each of them. The Calibrate button performs the calibration process discussed in section 3.2 on the recorded data and then stores the calibration data in a file. The Exit button exits the calibration program.

Figure 3.5: Calibration Software Interface; Only the Target LED Survives Thresholding


3.5 Results
A set of 80 data points was used for camera calibration. The image size was 720x486 pixels, and the camera view-port was about 20 cm x 15 cm. The calibration parameters identified from the data were recorded. With these values of the calibration parameters, the calibration data was consistent with the identified model to the following extent:

Image-plane error (pixels): mean = 0.95, standard deviation = 0.59, maximum = 2.48

Object-space error (millimeters): mean = 0.252, standard deviation = 0.158, maximum = 0.672

Two more accuracy tests were done. In the first test, world x, y, z coordinates were given as input, and image coordinates were computed from the identified camera model. In the second test, x, y image coordinates and the z world coordinate were given as input, and the x, y world coordinates were computed from the calibration parameters. The results are summarized in tables 3.1 and 3.2.

45

Computed Image X   Computed Image Y   Actual Image X   Actual Image Y   Error
539.83             324.16             539.92           324.27           0.14
431.70             92.22              432.45           92.11            0.76
522.63             234.25             521.61           235.69           1.74

Table 3.1: Accuracy Test 1 (all dimensions are in pixels)

Computed world x   Computed world y   Actual world x   Actual world y   Error
2327.43            739.60             2327.27          740.44           0.86
2311.39            698.91             2311.11          699.27           0.45
2362.07            745.73             2361.92          745.85           0.19

Table 3.2: Accuracy Test 2 (all dimensions are in millimeters)

The desired accuracy in locating a tumor for radiation therapy is 2 mm. From the results in table 3.2, it is clear that the camera calibration precision is within the desired limits.


4. TREATMENT SIMULATION: PHYSICAL COMPONENTS

A major intent of the experimental testbed is to mimic actual conditions for lung tumor treatment. The main elements can be summarized as follows:

- Choice of phantom.
- A way to record the results of gating.
- Generation of a trajectory for the phantom to mimic tumor motion due to respiration.
- Choice of a proxy for indirect measurement of the tumor location inside the phantom.

The following sections will discuss these elements in detail.

4.1 Phantom and Film


A phantom is an experimental target made of a material, such as plastic, that is transparent to the radiation beam. A small cubical phantom was used to act as the target (see figure 4.1). The phantom consists of alternating polystyrene and radiosensitive-film slabs. The plastic and films are stacked together using a set of four screws and bolts. The films are Kodak X-Omat V type. This film is sensitive to both radiation and normal light, so when it is developed there is darkening in the portions where the film was exposed or irradiated. The amount of darkening is determined by the amount of exposure to radiation or light, so it can be used as an indirect measure of target coverage. The films can be analyzed by film-scanning hardware/software to obtain iso-coverage lines. In this process, the films were optically scanned to obtain the transmissivity as a function of x and y position. The transmissivity was converted to an equivalent radiation dose vs. x and y, and this dose distribution was analyzed to find contours of equal dose (isodose lines). Before the films can be used, they have to be cut to a size that fits inside the phantom. Since the films are sensitive to visible light, the cutting and stacking of the films has to be done inside a dark room. A jig was made to ease the process of cutting the films in the dark room. For 2-dimensional experiments, only one film was used inside the phantom; for the 3-dimensional tests, multiple films were alternated with plastic within the phantom.

Figure 4.1: X-Y Table and Phantom under Treatment Beam Source.


4.2 Emulation of Target Motion


To create a realistic scenario for the gated treatment of lung tumors, the phantom had to be moved in space, imitating the actual movement of a tumor with respiration. A computer-controlled X-Y table was used to generate the motion (see figure 4.1). A two-axis motion-controller board was used to interface the X-Y table with a PC through the ISA bus. An actual tumor motion plot (figure 1.6) was used to design the target trajectory. The X-Y table was driven by a programmable motion controller, which accepts quadrature input from the encoders on the x and y axes, computes the servo feedback calculations, and outputs corresponding analog voltage signals to the DC motors. For this purpose we used a mini-PMAC, a two-axis, ISA-bus motion-controller board for PCs running Windows 95 or 3.1. The mini-PMAC comes with software with a user-friendly graphical user interface, which can be used to:

- Configure the PMAC board for applications, including setting PID gains, the DC output voltage range, and maximum velocity and acceleration bounds for motion programs and jogging.
- Edit, download, upload and run motion programs.
- Perform simple jogging and homing operations on the X-Y table.


Tumor trajectories were programmed using the controller's PVT (position, velocity, time) trajectory specification format. In a PVT move, the user specifies the destination position, the destination velocity, and the time to be taken to reach that position. From the specified parameters for each such move piece, together with the beginning position and velocity (taken from the end of the previous piece), the PMAC computes a third-order position trajectory that meets the constraints. This results in a linearly changing acceleration, a parabolic velocity profile, and a cubic position profile for each trajectory segment (see figure 4.2). The PVT mode is useful for creating arbitrary trajectory profiles: it provides a building-block approach for assembling parabolic velocity segments into whatever overall profile is desired (see figure 4.3).

Figure 4.2: Parabolic Velocity Curve for PVT Moves


Figure 4.3: Describing a Contour in Segments of PVT Moves
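To make the segment algebra explicit: each PVT piece is the cubic determined by its boundary position and velocity. This is standard cubic (Hermite) interpolation, reconstructed here rather than taken from the PMAC documentation. Given a starting state $(p_0, v_0)$, a commanded ending state $(p_1, v_1)$ and a piece time T, the profile is

$$p(t) = p_0 + v_0 t + a_2 t^2 + a_3 t^3, \qquad a_2 = \frac{3(p_1 - p_0) - (2v_0 + v_1)T}{T^2}, \qquad a_3 = \frac{(v_0 + v_1)T - 2(p_1 - p_0)}{T^3}$$

which satisfies $p(T) = p_1$ and $\dot p(T) = v_1$, with $\ddot p(t)$ linear and $\dot p(t)$ parabolic, as described above.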

The PMAC controller is put into PVT mode with the program statement PVT{data}, where {data} is a constant, variable, or expression representing the piece time in milliseconds. A PVT-mode move is specified for each axis to be moved with a statement of the form {axis}{data}:{data}, where {axis} is a letter specifying the axis, the first {data} is a value specifying the end position or the piece distance, and the second {data} is a value representing the ending velocity. For example, the command

PVT200 X9000:150

specifies that the X-Y table should move its X-axis 9000 units, with an ending velocity of 150 units/sec, in a time of 200 ms. Two different trajectories were used to generate motion for the X-Y table: one was a simple to-and-fro motion in one dimension, and the other was a two-dimensional motion imitating figure 1.6. Figure 4.4 shows a position-time plot of the generated motion. The position dimension in the plot is in encoder counts, where 4000 encoder counts = 1 cm.

Figure 4.4: Generated Trajectory, Position Vs Time
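As an illustration of how such a motion program can be generated, the sketch below emits one cycle of a sinusoidal, respiration-like x-axis trajectory as PVT statements in the format described above. The amplitude, period, and segment count are hypothetical values chosen for illustration, not the parameters of the actual experiments.

#include <cmath>
#include <cstdio>

// Emit one breathing-like cycle as PMAC PVT statements for the X axis.
// Position is in encoder counts (4000 counts = 1 cm); velocity follows the
// units/sec convention of the example command in the text.
int main()
{
    const double pi          = 3.14159265358979;
    const double amplitudeCm = 1.0;     // hypothetical excursion
    const double periodMs    = 4000.0;  // hypothetical 4 s respiratory period
    const int    segments    = 20;      // PVT pieces per cycle

    const double dt = periodMs / segments;            // piece time in ms
    const double w  = 2.0 * pi / (periodMs / 1000.0); // rad/sec
    std::printf("PVT%.0f\n", dt);                     // enter PVT mode
    for (int i = 1; i <= segments; ++i) {
        double t   = (periodMs / 1000.0) * i / segments;  // seconds
        double pos = amplitudeCm * 4000.0 * std::sin(w * t);
        double vel = amplitudeCm * 4000.0 * w * std::cos(w * t);
        std::printf("X%.0f:%.0f\n", pos, vel);        // end position : end velocity
    }
    return 0;
}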

4.3 Proxy for Target Location


A surface-mounted light-emitting diode (LED) was used as a proxy to provide an indirect measure of the target location inside the phantom. The LED was mounted on the top surface of the phantom, as shown in figure 4.5.


Figure 4.5: Surface Mounted LED Used As Proxy

A 9-volt battery was used to power the LED, and a CCD video camera was used to capture real-time images of the LED in dark surroundings, where the illuminated LED acted as a high-contrast proxy whose images could be easily thresholded in real time. The location of a hypothetical spherical target was defined with respect to the LED center. The 3-dimensional location of the LED proxy was deduced using a calibrated video camera, as discussed in chapter 3.


5. TREATMENT SIMULATION: SOFTWARE COMPONENTS

The final elements of the experimental testbed are a graphical display of computed target and beam coordinates and real-time gating of the treatment beam. The aim is to graphically indicate the relative coordinates of the tumor and beam in a consistent, easily interpreted display, and to permit interactive gating control over the beam. To perform gating, two methods were introduced: human-in-the-loop gating, where a human performs the gating via a key press using live graphical images of the target and beam as feedback, and automated gating, where a computer performs the gating using a simple control algorithm. A graphical user interface was developed to allow the user to select various options, including the beam portal size and color, the tumor size, and automatic or manual gating. Further, the cumulative target coverage (exposure) was shown on the screen in real time, which assisted the operator in selectively gating the exposures toward under-covered portions of the target. Software solutions addressing these needs are described in the following sections.

5.1 Hardware Platform


For the project, two workstations were used: a Silicon Graphics Incorporated (SGI) O2 desktop workstation and a Silicon Graphics 1440 desktop workstation. The O2 was used for grabbing the video input and for integrating all the software units in one user interface. The 1440 workstation was used to control the robot through a serial link with the robot controller. In addition, the two workstations communicated with each other through a network link. In this thesis we refer to the O2 workstation as the user-interface workstation and to the 1440 workstation as the robot-controller workstation. The O2 has an add-on video-digitizing unit, which was used to capture the video stream. SGI has developed a standard application-programming interface (API) called the Digital Media Library [29] for dealing with multimedia tasks such as video grabbing. All SGI workstations are optimized for OpenGL, an open 2-D and 3-D graphics standard [30, 31]. For the purposes of this project, only the 2-D elements of OpenGL were used. For user-interface design, the O2 supports OSF/Motif as well as the lower-level X Window System [32]. In addition, SGI has developed a C++ version of Motif called ViewKit [33, 34]. For this project we used RapidApp [35], a GUI builder that supports the creation of both Motif and ViewKit user interfaces.

5.2 Real-time Computation of Target Coordinates


The phantom was placed on the X-Y table, and motion emulating lung tumor motion was produced using the motion program described in chapter 4. The surface-mounted light-emitting diode (LED) on the phantom was driven by a 9-volt battery. A black-and-white video camera with a zoom lens was used to capture live video images of this proxy (LED) against a dark background. The video stream (RS-170) was digitized using a frame-grabber board inside the Silicon Graphics O2 workstation. The video images were captured and buffered using the software library functions provided by Silicon Graphics for the O2 [29]. Once an image is stored in a buffer (array), it can be processed using standard image-processing techniques. To enable real-time image processing, images of the LED were taken in dark surroundings, which produced high-contrast images. These high-contrast video images were thresholded, and the centroid of the LED was computed for each captured frame. The algorithm used is described in the following:

1. Choose a small threshold, scan every 5th vertical and every 5th horizontal line in the image, and find the first pixel whose value is greater than the threshold. Mark the pixel location as (x, y).
2. Threshold a small 50×50-pixel window around (x, y) with a larger threshold.
3. Compute the centroid using the formula
$$x = \frac{1}{N}\sum_{i=1}^{N} x(i), \qquad y = \frac{1}{N}\sum_{i=1}^{N} y(i)$$

where x, y are the x and y coordinates of the centroid, N is the number of white pixels (pixels whose gray-scale value is greater than the threshold), and $x(i), y(i)$ are the x and y coordinates of the ith white pixel.
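A minimal sketch of this two-stage search, the coarse scan followed by a windowed, unweighted centroid, is shown below under the same assumption of an 8-bit, row-major image buffer; the two thresholds are left as parameters.

#include <algorithm>
#include <cstdint>

// Two-stage LED localization: a coarse scan of every 5th row/column finds a
// bright seed pixel, then an unweighted centroid is taken over a 50x50 window.
// Returns false if no bright pixel is found.
bool locateLED(const uint8_t* img, int w, int h,
               uint8_t coarseThresh, uint8_t fineThresh,
               double& cx, double& cy)
{
    int seedX = -1, seedY = -1;
    for (int y = 0; y < h && seedX < 0; y += 5)        // step 1: coarse scan
        for (int x = 0; x < w; x += 5)
            if (img[y * w + x] > coarseThresh) { seedX = x; seedY = y; break; }
    if (seedX < 0) return false;

    int x0 = std::max(0, seedX - 25), x1 = std::min(w, seedX + 25);
    int y0 = std::max(0, seedY - 25), y1 = std::min(h, seedY + 25);
    long sumX = 0, sumY = 0, n = 0;
    for (int y = y0; y < y1; ++y)                      // steps 2-3: windowed centroid
        for (int x = x0; x < x1; ++x)
            if (img[y * w + x] > fineThresh) { sumX += x; sumY += y; ++n; }
    if (n == 0) return false;
    cx = static_cast<double>(sumX) / n;
    cy = static_cast<double>(sumY) / n;
    return true;
}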

The next step performed by the software is the computation of the 3-D coordinates of the LED in the robot's base frame. The camera is calibrated with respect to the base frame of the robot using the method discussed in chapter 3, and the calibration parameters are used to compute the 3-D coordinates of the LED using the method described in section 3.3. However, these equations require a known world Z coordinate of the LED. To obtain it, the robot calibration tool (used in the calibration; see figure 2.1) was jogged so that the tool tip touched the surface-mounted LED on the phantom. The program then used the known world Z coordinate along with the image X and Y coordinates to compute the 3-D world coordinates of the LED.

Finally, the computed target coordinates must be displayed in a frame permitting easy visualization of registration with respect to the treatment beam. We chose a beam's eye view, in which the beam axis is normal to the display. The tool frame used for camera calibration (see chapter 2) can be used for locating the target in the beam's eye view, because the x-axis of this frame is coincident with the beam axis. Equation 2.1 can be used to solve for $P_{7/6}$, the position of a point in the tool frame:

$$P_{7/0} = P_{6/0} + R_{6/0} P_{7/6}$$

The only difference here is that subscript 6 refers to the tool frame used for camera calibration, not the robot's default tool frame. Since the x-axis of this frame is parallel to the beam, the y and z coordinates in this frame give the location of the target in the beam's eye view.
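A minimal sketch of the beam's-eye-view computation implied by this equation: the transform is inverted to express the target point in the tool frame, and the y and z components are taken as the display coordinates. The small vector and matrix helpers are hypothetical, not types from the thesis software.

#include <array>

using Vec3 = std::array<double, 3>;
using Mat3 = std::array<std::array<double, 3>, 3>;   // row-major R_{6/0}

// Solve P_{7/6} = R_{6/0}^T (P_{7/0} - P_{6/0}). The tool frame's x-axis lies
// along the beam, so components 1 and 2 (y, z) of the result are the
// beam's-eye-view coordinates of the target.
Vec3 toolFramePoint(const Mat3& R, const Vec3& p6, const Vec3& p7)
{
    Vec3 d{p7[0] - p6[0], p7[1] - p6[1], p7[2] - p6[2]};
    Vec3 out{};
    for (int i = 0; i < 3; ++i)                       // multiply by R transpose
        out[i] = R[0][i] * d[0] + R[1][i] * d[1] + R[2][i] * d[2];
    return out;
}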


5.3 Graphical Interface


The software components were integrated using a graphical interface. The graphical interface consisted of three parts: Controls, Menu, and Display (figure 5.1).

Figure 5.1: Treatment Software Interface

5.3.1 Display

This portion of the window was used to graphically display the tumor and beam coordinates in real time; it served as feedback for the operator to gate the beam. OpenGL was used for the graphics operations. The display consisted of two overlapping regions: the main drawing area and the overlay.


The main drawing area was used to draw a moving circle representing the cross-section of a spherical tumor. The intensity values of the pixels inside the circle represented the target coverage: the higher the intensity of a pixel, the higher the cumulative irradiation of that part of the target. This display of the distribution inside the tumor assisted the operator in adjusting the gating in real time to perform preferential gating of under-dosed parts of the target. The overlay was used to draw a circle representing the cross-section of the beam portal. OpenGL's overlay feature was used, which prevented unnecessary redrawing of the beam whenever the tumor circle was redrawn. An overlay has the property that only its non-black pixel values are drawn on the frame buffer; the main drawing-area pixel values appear wherever the overlay has black pixels (see figure 5.1).

5.3.2 Controls

This part of the interface had buttons and menus for controlling various functions, as described in the following:

1. Mode radio button. This button was used to select between manual and automated gating. If manual gating was selected, the gating could be performed by pressing <ENTER>. If automated gating was selected, the computer performed the gating using a simple criterion: shoot the beam when the distance between the beam center and the target center is less than a threshold. In our experiments we chose a distance threshold of 2.5 mm.

2. Display buttons. The Display Tumor and Display Beam buttons were used to display the target on the main drawing area and the beam on the overlay, respectively.

3. Node selection. A stereotactic feature was added to the treatment simulation. A node file was used to store the node direction values. The user could select a node number from the selection box and press the Go button to command the robot to that node location. Before using this feature, the network connection between the server (O2) and the client (IRIS 4D) must be established, which is done with the Connect button.

4. Connect button. This button was used to start the server on the user-interface workstation. This step is followed by starting a client program on the robot-controller workstation. This socket-based connection was necessary in order to command the treatment beam to gate or to command the robot to go to a new node location. The robot was not directly controlled by the O2; it was controlled by the Cyberknife robot-controller workstation, and the network connection between the two workstations gave the O2 indirect control over the robot (see figure 3.4).


5. Exit button. This button stopped all the processes and exited the program.

6. DCH button. This button allowed the user to draw the coverage area histogram (CAH) for target and non-target areas (see figure 5.4). The Y-axis of the coverage area histogram is the percentage of the target covered, and the X-axis is the exposure time.

5.3.3 Menu

There are two menu items, Tumor and Beam, which were used to select size and color options for the target and beam. Selecting the Options submenu item from the Tumor or Beam menu brought up two dialog boxes. These dialog boxes can be used to select different options, including the size, color, and target offset (the distance of the target from the LED) for the target and beam.

5.4 Beam Control


A child process was created to control the beam. This process waited for input from the keyboard; in manual mode, pressing <ENTER> turned the beam on (in automatic mode, the gating is performed by the computer). A separate process was necessary because the parent process, which creates the graphical user interface and captures the live video stream, is blocked while the video stream is present. The child process performed two functions:


- To gate the beam when the operator presses <ENTER> (in manual gating mode).

- To terminate the video capture when the user enters a particular key on the keyboard.

Pressing <ENTER> sends a character message through the network to the client, which is recognized by the client as a command for gating the beam. The client then sends a message to the robot's controller through the serial port, commanding the accelerator to turn the beam on for a short duration of time. Obviously, the network connection must be established before the operator can use this feature. With each press of <ENTER>, the beam turns on for a pre-specified amount of time (50 ms in our case). Terminating the video capture gives control back to the user interface, which can then be used to perform other functions, such as selecting a different node or a different gating mode.
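For illustration, the following is a minimal sketch of the kind of one-character gating message described above, written with POSIX sockets. The server address, port, and command byte are hypothetical, and the actual software kept a persistent connection rather than reconnecting per command.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

// Send a single-character gating command to the robot-controller workstation.
// Returns 0 on success, -1 on any failure.
int sendGateCommand(const char* serverIp, unsigned short port, char cmd)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) return -1;

    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port   = htons(port);
    if (inet_pton(AF_INET, serverIp, &addr.sin_addr) != 1 ||
        connect(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) < 0) {
        close(fd);
        return -1;
    }
    ssize_t n = write(fd, &cmd, 1);   // e.g. one byte meaning "gate beam on"
    close(fd);
    return (n == 1) ? 0 : -1;
}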

5.5 Node Generation


A separate program was developed solely for selecting the nodes for treatment. Here, a node means a direction from which the beam irradiates the target. From the results in chapter 2, it is clear that there is an inherent tool-frame calibration error associated with each tool direction. This means that the actual tool-tip coordinates do not exactly match the coordinates displayed by the robot controller.


To use the tool frame for computing the beam direction, and hence the tumor location in the beam's eye view, we have to compensate for that error, as well as for any error due to misalignment between the tool frame and the beam axis. For this purpose we utilize an alignment laser present in the accelerator. The laser beam is collinear with the treatment beam, so it can be used for beam-direction adjustments. To select a node, the robot is jogged to a direction, with the laser turned on, such that the laser beam falls exactly on the surface-mounted LED proxy on the phantom. The direction coordinates computed by the robot controller are recorded. Additionally, the location of the LED in the beam's eye view is computed from an image from the calibrated camera. Ideally, if there were no error, the computed location of the center of the LED in the beam's eye view would be exactly at the center of the beam. This is not the case, due to calibration imprecision. The differences between the computed and ideal values are stored as offsets in a file, along with the actual robot coordinates. When the treatment program subsequently uses this node, it adds these offsets to the computed coordinates to compensate for the error.


Figure 5.2: Node Generation Software Interface

To ease the process of node calibration correction, a graphical interface tool was developed (see figure 5.2). The steps involved in generating nodes are:

1. Jog the robot so that the laser beam falls exactly on the LED.
2. Press the Add Node button on the software interface, which performs the following operations:
   a. Gets the 3-D coordinates of the robot tool using the network connection.
   b. Captures an image of the LED from the calibrated camera, thresholds it, and computes the LED centroid as explained in section 5.2.
   c. Computes the location of the LED center in the beam's eye view and stores the resulting value in a file along with the 3-D coordinates reported by the robot.


3. To add more nodes, repeat steps 1 and 2.

5.6 Summary of Treatment Simulation Protocol


To perform an experimental evaluation of IDRT, the functions described in this chapter are performed in the following order (see also figure 5.3):

1. Place the phantom on the X-Y table so that the LED is visible in the camera's view.
2. Select the different node directions using the method discussed in section 5.5.
3. Run the motion program on the PMAC motion-controller board.
4. Start the graphical user interface on the user-interface workstation.
5. Start the server by pressing Connect.
6. Start the client program on the robot-controller workstation.
7. Select the beam size, color, mode of gating, and offset of the target.
8. Select a particular node from the node-selection menu and press Go to command the robot to that node location.
9. If automatic gating mode is selected, wait for delivery of the desired dose (the dose value is displayed on an interface provided by the Cyberknife manufacturer), then enter keyboard input q to stop the video transmission. If manual gating is selected, press <ENTER> to deliver each single dose shot when desired.
10. Select a different node and perform steps 7-9 until done with all the nodes.
11. Press the Exit button on the software interface to exit the treatment program.


Figure 5.3: Block Diagram Description of Treatment Simulation

5.7 Beam Size Selection


A separate simulation program was developed to perform a statistical analysis of the effects of beam size, tumor size, and distance threshold. The program generated simulated random trajectories of a target/tumor on the screen. Tests involving automatic gating were performed for different values of beam size, tumor size, and trigger-distance threshold. The analysis examined how much healthy tissue was irradiated by the time the entire target had received the desired exposure. An arbitrary target exposure requirement was chosen; when every pixel in the target had received this minimum required dose, the beam-on times (exposures) for pixels in the non-target area were recorded.

A set of 20 experiments was done for each of the 13 cases of beam size and trigger-distance threshold, and the mean and standard deviation of each of the resulting values were computed. Here, the trigger-distance threshold is the distance between the tumor center and the beam center below which the beam is switched on. An arbitrary required exposure time (100 units) for each target pixel was chosen, and a dose-unit was defined as the percentage of exposure time relative to the required exposure time. Two performance measures, measure 1 and measure 2, were defined to evaluate the exposure of healthy tissue: they are the numbers of non-target pixels that receive more than 75 and more than 100 dose-units, respectively. The simulation results are summarized in tables 5.1 and 5.2 and figures 5.4a, 5.4b and 5.4c.


Beam radius   Dose-units (mean, std. dev.)   Measure 1 (mean, std. dev.)   Measure 2 (mean, std. dev.)
5.1           218, 24                        310, 59                       112, 71
5.6           163, 12                        399, 60                       139, 63
6.1           129, 7                         491, 62                       156, 80
6.6           107, 1                         592, 33                       114, 43
7.1           100, 0                         796, 37                       0, 0

Table 5.1: Statistical Results for First Set of Values for Target Size and Distance Threshold (Target radius = 5.0 mm, Size = 1257 pixels, Distance threshold = 2 mm)

Beam radius   Dose-units (mean, std. dev.)   Measure 1 (mean, std. dev.)   Measure 2 (mean, std. dev.)
5.1           264, 34                        424, 86                       193, 107
5.6           192, 17                        537, 82                       253, 99
6.1           141, 10                        537, 79                       164, 97
6.6           122, 8                         669, 97                       218, 123
7.1           106, 1                         760, 57                       145, 57
7.6           100, 0                         997, 64                       0, 0
8.1           100, 0                         1341, 71                      0, 0

Table 5.2: Statistical Results for Second Set of Values for Target Size and Distance Threshold (Target radius = 5.0 mm, Size = 1257 pixels, Distance threshold = 2.5 mm)


Figure 5.4a: Coverage Area Histogram, Beam Radius = 5.1, Distance Threshold = 2 mm (Vertical axis = % of pixels, horizontal axis = % of required exposure time)

Figure 5.4b: Coverage Area Histogram, Beam Radius = 6.1, Distance Threshold = 2 mm (Vertical axis = % of pixels, horizontal axis = % of required exposure time)


Figure 5.4c: Coverage Area Histogram, Beam Radius = 7.1, Distance Threshold = 2 mm (Vertical axis = % of pixels, horizontal axis = % of required exposure time)

From tables 5.1 and 5.2 and figures 5.4a, b and c, we can see that the number of cells (pixels) of healthy tissue that are irradiated is smaller for smaller beam sizes. However, the number of pixels getting a dangerously high dose generally increases with decreasing beam size. For the case when beam size = target size + distance threshold, the entire tumor is inside the beam whenever the beam is on, so all the tumor pixels get the full-duration exposure with each shot. As seen in figure 1.1, there is an optimal dose at which most of the tumor tissue is affected while most of the healthy tissue is relatively unaffected. Assigning this dose level to our (arbitrary) 100 dose-units of exposure, it is apparent that measure 2, the number of pixels in healthy tissue receiving more than 100 dose-units, is the more important measure. Using this logic, the optimal value for the beam size is:

Beam size = Target size + Distance threshold

Increasing the beam size further also achieves measure 2 = 0 while the tumor is fully exposed, but more healthy tissue gets irradiated. Further tests were conducted in which the beam size was kept fixed at the deduced optimal value and the distance threshold was varied. The values for healthy-tissue exposure and percentage of beam-time utilization were recorded, as reported in table 5.3. Here, the percentage of beam-time utilization is the percentage of the total treatment time during which the beam was turned on.

Distance threshold   Percentage of beam-time utilization   Non-target pixels with exposure > 75 dose-units
1.0                  1.6                                   418
1.5                  4.0                                   625
2.5                  6.0                                   1026
3.0                  10.3                                  1243

Table 5.3: Resulting Values for Non-Target Coverage and Beam-Time Utilization with Change in Distance Threshold

From the results in table 5.3, it is clear that decreasing the distance threshold irradiates less healthy tissue, but at the same time the treatment duration increases very rapidly. For example, with a distance threshold of 1.0 mm, actual treatment would be performed during less than 2% of the time the patient is on the treatment table. Based on this analysis, we chose a threshold of 2.5 mm for our experiments.
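For concreteness, the following is a minimal sketch of the kind of gating simulation described in this section: the target wanders, the beam fires whenever the center distance is below the trigger threshold, and dose accumulates on a film grid that moves with the target, as the physical film does. The grid scale, the mean-reverting random motion, and the reported statistics are illustrative assumptions, not the thesis program.

#include <cmath>
#include <cstdio>
#include <random>
#include <vector>

int main()
{
    const int    filmSize = 300;                  // film grid; target at center
    const double targetR = 50.0, beamR = 75.0;    // e.g. 10 px/mm scale
    const double trigger = 25.0;                  // 2.5 mm trigger threshold
    const int    required = 100;                  // dose-units per target pixel
    const double c = filmSize / 2.0;

    std::vector<int> dose(filmSize * filmSize, 0);
    std::mt19937 rng(42);
    std::normal_distribution<double> step(0.0, 3.0);

    double tx = 0.0, ty = 0.0;                    // target offset from beam axis
    long beamOn = 0, ticks = 0;
    bool done = false;
    while (!done) {
        ++ticks;
        tx = 0.9 * tx + step(rng);                // bounded random target motion
        ty = 0.9 * ty + step(rng);
        if (std::hypot(tx, ty) < trigger) {       // the gating criterion
            ++beamOn;                             // beam center in film frame is
            for (int y = 0; y < filmSize; ++y)    //   (c - tx, c - ty)
                for (int x = 0; x < filmSize; ++x)
                    if (std::hypot(x - (c - tx), y - (c - ty)) <= beamR)
                        ++dose[y * filmSize + x];
        }
        done = true;                              // stop when every target pixel
        for (int y = 0; y < filmSize && done; ++y) // has the required dose
            for (int x = 0; x < filmSize; ++x)
                if (std::hypot(x - c, y - c) <= targetR &&
                    dose[y * filmSize + x] < required) { done = false; break; }
    }
    long over75 = 0;                              // measure-1-style count
    for (int y = 0; y < filmSize; ++y)
        for (int x = 0; x < filmSize; ++x)
            if (std::hypot(x - c, y - c) > targetR && dose[y * filmSize + x] > 75)
                ++over75;
    std::printf("beam-time utilization %.1f%%, non-target pixels > 75: %ld\n",
                100.0 * beamOn / ticks, over75);
    return 0;
}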


6. RESULTS AND CONCLUSIONS

Radiosensitive films were used to record the results of the IDRT experiments. These films were subsequently scanned optically for analysis. From the scanned images, isodose lines and dose area/volume histograms were computed and drawn. Isodose lines are contours corresponding to points of equal relative (percentage) dose; commercial analysis software was used to compute and plot them. The dose area/volume histograms are plots of percentage area (or volume) vs. percentage dose. Here, dose is calculated from the pixel values (optical densities) of the scanned image, assuming a linear inverse relationship between dose and pixel value (the transmissivity of the developed film). The target area is the number of pixels inside the target (a presumed tumor with circular cross-section). In the ideal case, 100% of the target receives 100% of the desired dose, and the non-target area receives 0% of the prescribed dose. In practice, the distortion and divergence of the beam, the non-conformality of the beam shape with the target shape, and the penetration of the beam through the active depth of the 3-D body result in significant deviations from this ideal dose area/volume histogram. A software program was developed to read image files and draw dose area histograms given the location and size of a presumed circular target in the image.
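As an illustration of this histogram computation, the sketch below derives a cumulative dose-area histogram for a circular target from a scanned 8-bit film image, assuming the stated linear inverse relation between pixel value and dose and normalizing to the maximum dose found in the target. Both assumptions are for illustration only.

#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

// hist[d] = percentage of target pixels receiving at least d% of the maximum
// dose, for d = 0..100. Pixel value is mapped to dose as dose = 255 - value.
std::vector<double> doseAreaHistogram(const uint8_t* img, int w, int h,
                                      double cx, double cy, double radius)
{
    std::vector<double> doses;
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
            if (std::hypot(x - cx, y - cy) <= radius)
                doses.push_back(255.0 - img[y * w + x]);  // linear inverse map

    std::vector<double> hist(101, 0.0);
    if (doses.empty()) return hist;
    double maxDose = *std::max_element(doses.begin(), doses.end());
    for (int d = 0; d <= 100; ++d) {
        long n = std::count_if(doses.begin(), doses.end(),
                               [&](double v) { return v >= maxDose * d / 100.0; });
        hist[d] = 100.0 * n / doses.size();
    }
    return hist;
}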


6.1 Results
In the first experiment, the beam was turned on, with no gating, while the target was moved in space using the X-Y table. This was done to illustrate the effects of realistic target motion during treatment. Figure 6.1 shows the result of this experiment. The darkening of the corners of the film is due to light leaking through the corners of the phantom film holder. For developing, the films had to be attached to a larger film, because the developing hardware could not handle small films. This was done using cellophane tape, residues of which can be seen on the films (e.g., see figure 6.1).

Figure 6.1: Exposed Film with No Gating

In a second set of experiments, a target with single-axis motion was considered, and gating was performed using both a tele-operated human-in-the-loop approach and the automated approach (discussed in chapter 5). A two-dimensional target of circular shape was considered, and the direction of the beam was kept nearly perpendicular to the target. The results for manual gating are shown in figures 6.2a, 6.2b and 6.2c; the corresponding results for automatic gating are shown in figures 6.3a, 6.3b and 6.3c. In this experiment the presumed target had a diameter of 7.5 mm, the beam portal diameter was 12.5 mm, and the distance threshold for automatic gating was 2.5 mm.

Figure 6.2a: Exposed Film for Manual Gating, 1-D Target Motion


Figure 6.2b: Isodose Lines, Manual Gating, 1-D Motion

Figure 6.2c: Dose Area Histogram, Manual Gating, 1-D Motion


Figure 6.3a: Exposed Film for Automated Gating, 1-D Motion

Figure 6.3b: Isodose Lines, Automatic Gating, 1-D Motion


Figure 6.3c: Dose Area Histogram, Automatic Gating, 1-D Motion

In a third set of experiments, a more realistic two-dimensional tumor motion was considered (see chapter 4), and results were again taken for both the human-in-the-loop approach and the automated approach. Again the direction of the beam was perpendicular to the target, and the same target, beam, and threshold parameters were used. Figures 6.4a, 6.4b and 6.4c show the results for manual gating, and figures 6.5a, 6.5b and 6.5c show the results for automated gating.


Figure 6.4a: Exposed Film for Manual Gating, 2-D Motion

Figure 6.4b: Isodose Lines, Manual Gating, 2-D Motion


Figure 6.4c: Dose Area Histogram, Manual Gating, 2-D Motion

Figure 6.5a: Exposed Film for Automated Gating, 2-D motion


Figure 6.5b: Isodose Lines, Automated Gating, 2-D Motion

Figure 6.5c: Dose Area Histogram, Automatic Gating, 2-D Motion


In a fourth set of experiments, a three-dimensional tumor was considered. To achieve this, multiple films were stacked in different slots of the phantom. The shape of the target was taken to be spherical, so that the projection of the target on each slice/film was a circle. Five different nodes (beam directions) were used for treatment: one normal to the film, and four at an elevation angle of approximately 30 degrees from vertical with roughly uniform azimuthal spacing (i.e., every 90 degrees of rotation about the z-axis). Dose-volume histograms were drawn to show the target and non-target coverage. Figure 6.6a shows the 9 stacked films with their z coordinates, where the z coordinate is the distance from the center film, i.e., the center of the target. The target diameter was specified to be 10 mm (the target was chosen larger for this 3-D case to spread the exposure results over more film layers). The results shown are for automatic gating with beam diameter = 15 mm and trigger threshold = 2.5 mm.


Figure 6.6a: 9 Stacked Films for 3-Dimensional Tumor, 2-D motion, Automatic Gating from 5 Beam Approaches


Figure 6.6b: Isodose Lines for 9 Stacked Films (1 to 9), 2-D Motion, Automatic Gating from 5 Beam Approaches


Figure 6.6c: Dose Volume Histogram for Stacked Films, 2-D Motion, Automatic Gating, 5 Beam Approaches

From figure 6.6a, it can be seen that all 5 beams converge on the center film and diverge increasingly with distance from the center. This leads to a lower dose to the healthy tissue while focusing more dose on the target.

In the final experiment, treatment was performed with no image feedback, using a method similar to present practice for treating lung tumors. A margin of ~1.5 cm was considered around the tumor boundary, and the beam radius was increased by this margin. The tumor was assumed to have a diameter of 15 mm, and the beam diameter used was 20 mm. The target was moved in 2-D using the same pattern as in the previous experiments. This experiment was performed to compare traditional vs. image-directed gating methods. The direction of the beam was perpendicular to the stacked films, so each of the stacked films was exposed to similar levels of radiation. Figures 6.7a through 6.7c show the results of this experiment.

Figure 6.7a: Exposed Film for Traditional Non-Gated Treatment, 2-D Motion and 1 Approach Direction


Figure 6.7b: Isodose Lines for Traditional Treatment Example

Figure 6.7c: Dose Area Histogram for the Traditional Treatment Example


6.2 Conclusions
The aim of this work was to experimentally explore the use of image-directed radiation therapy (IDRT) for lung cancer treatment using the Cyberknife treatment system. Video images of a proxy were used as feedback to perform gated radiation therapy on a phantom.

The treatment testbed required system calibration and the development of real-time displays and controls. System calibration required identification of the coordinate transformations (1) between the tool frame and the robot frame and (2) between the camera frame and the tool frame. The results in chapters 2 and 3 clearly indicated that although there is an uncertainty of more than 5 mm in calibrating the tool frame with respect to the base frame, the coordinate transformation between the camera coordinate system and the robot coordinate system was very accurate (~0.3 mm). The inaccuracy in the calibration of the robot tool with respect to the robot's base frame is attributed to inaccuracy of the robot's kinematic model, which was provided by the manufacturer of the robot.

To achieve a real-time display of tumor and beam images using position information from the calibrated video camera, a graphical interface tool was developed. The tool performed three main functions: displaying relative beam and target coordinates in real time, providing means to select different features of the beam and the target, and performing beam gating. The beam gating was performed using two methods: manual, where gating was performed by key-press, and automated, where gating was performed by the computer based on a simple criterion.


The beam was controlled through a series of communication links between the different hardware components: a network link between the user-interface workstation and the robot-controller workstation, and a serial link between the robot-controller workstation and the robot controller. Even with the latency involved in these communications, control over the beam was fast enough to perform the gating in real time.

The results in section 6.1 show that the manual and the automated gating methods result in very similar target coverage. The experiments also clearly indicated that treatment performed using image-directed gating results in much lower radiation exposure to healthy tissue than traditional treatment, while delivering a comparable dose to the target tumor tissue. The use of more than one node (beam approach direction) results in an even lower dose to healthy tissue.

From the results obtained in this study, an image-directed radiotherapy system with gating using the Cyberknife appears to be technically feasible. The results clearly indicate that the ability to manually or automatically gate the radiation beam during treatment permits a reduction in the size of the beam. With a smaller beam, the radiation received by the target can be increased while decreasing the radiation that normal tissue receives.


6.3 Future Work


The physical treatment simulation performed in this study had certain limitations and assumptions:

1. Very simple tumor shapes, spheres and circles, were considered.
2. It was assumed that the 3-D position of the tumor could be deduced from the position of a proxy. Further, it was assumed that the relation between the proxy and the tumor remains constant over time.
3. To obtain the 3-D position, only one camera was used instead of a stereo-camera system. This was due both to hardware constraints associated with the workstation and to image-processing timing constraints.
4. A simple proxy was used for the experiment: a high-contrast light-emitting diode against a dark background, which could be identified using a simple thresholding algorithm. This simplification was introduced to satisfy the critical timing associated with real-time image feedback. In real scenarios, a more complex proxy has to be considered; moreover, more than one proxy must be considered to accommodate non-symmetrical tumor shapes.

Much work needs to be done before the system can be used for actual treatments. In this study a proxy was used to obtain indirect positional information about tumors, and the motion of a tumor was assumed to be two-dimensional. To allow more flexible three-dimensional motion, a stereo-camera system has to be included instead of a single camera. Present computer hardware limits the use of two cameras simultaneously, and there are timing issues involved in using two cameras. The use of dedicated image-processing hardware with on-board processing (such as thresholding) within a host workstation may be a solution. Such a machine-vision board should accommodate simultaneous camera inputs as well as provide fast on-board image processing. With fast image processing, one or more complex proxies could be used.

Other imaging modalities could be used for obtaining the positional information of tumors. High-quality portal X-ray imaging might be a solution; however, much needs to be done before the image quality of portal images is sufficient for tumor segmentation. Amorphous silicon detectors appear to be a promising technology [36]. Another technique could be the use of invasive magnetic or radioactive seeds inside the tumor, though this method compromises the objective of non-invasiveness. Studies are being done at the Cleveland Clinic to find a functional relationship between fiducial marks on the chest of a patient and the position of a lung tumor. Studies involving chest bands, indelible ink marks on the chest, etc., are being performed to find the relationship between the respiratory motion of tumors and external indicators. Instead of using images of a proxy, other methods can be used to provide positional feedback of tumors under respiratory motion. Selective gating can be performed at particular points in the respiratory cycle, such as maximum exhale or inhale.

In the present study, gating was used to perform the treatment. Though gating achieves desirable results in terms of minimizing the irradiation of healthy tissue, it has a fundamental disadvantage: it requires a longer treatment time, because the target is within the beam's path only for a fraction of the time. A solution to this problem could be tracking, i.e., following the target with the beam turned on. The robot in the present Cyberknife treatment system is not designed to perform tracking. Though the maximum speed of the robot (1 m/sec) is much higher than that of typical tumors (1 cm/sec), the system still cannot be used for tracking due to the inherent latency of communication between the control hardware and software. With the availability of faster hardware, the system may be used for this purpose.


APPENDIX 1

RAC-Based Camera Calibration Algorithm [23]


Let us assume that the image center coordinates $(c_x, c_y)$ and the ratio of scale factors are known. The camera calibration algorithm consists of two stages. In the first stage, the rotation matrix R and the translational parameters $t_x$ and $t_y$ are computed. In the second stage, the remaining camera parameters are estimated using the results of the first stage.

STAGE 1: COMPUTATION OF THE ROTATION MATRIX R AND THE TRANSLATIONAL PARAMETERS $t_x$ AND $t_y$

1. Computation of the image coordinates $(x_{i,j}, y_{i,j})$

Let N be the number of calibration points. Then, for j = 1, 2, ..., N,

$$x_{i,j} = x_{f,j} - c_x, \qquad y_{i,j} = y_{f,j} - c_y$$

where $x_{f,j}$ and $y_{f,j}$ are the computer representations of the image coordinates for the jth calibration point.

2. Computation of the intermediate parameters $\{v_1, v_2, v_3, v_4, v_5\}$

Since the RAC is independent of the lens distortion coefficient k and the focal distance f, the rotation matrix R and the two translation components $t_x$ and $t_y$ may be solved from the RAC equation.


Define

$$\{v_1, v_2, v_3, v_4, v_5\} \equiv \{r_1 t_y^{-1},\ r_2 t_y^{-1},\ t_x t_y^{-1},\ r_4 t_y^{-1},\ r_5 t_y^{-1}\}$$

For the jth calibration point, dividing both sides of equation 3.20 by $t_y$ and rearranging the resulting expression yields

$$\begin{bmatrix} x_{w,j}\, y_{i,j} & y_{w,j}\, y_{i,j} & y_{i,j} & -x_{w,j}\, x_{i,j} & -y_{w,j}\, x_{i,j} \end{bmatrix} \begin{bmatrix} v_1 \\ v_2 \\ v_3 \\ v_4 \\ v_5 \end{bmatrix} = x_{i,j} \tag{a.1}$$

where $x_{w,j}$ and $y_{w,j}$ are the x and y world coordinates of the jth calibration point. The minimum number of non-collinear calibration points needed to solve this equation is N = 5. In practice, however, N >> 5, and the equations are solved by a linear least-squares algorithm.
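As an illustration of how this least-squares step might be implemented, the sketch below assembles the over-determined system of equation (a.1) and solves its 5×5 normal equations by Gauss-Jordan elimination. This is a generic routine written for illustration, not code from the thesis.

#include <cmath>
#include <cstddef>
#include <utility>
#include <vector>

// Least-squares solution of equation (a.1): each calibration point contributes
// the row [xw*yi, yw*yi, yi, -xw*xi, -yw*xi] with right-hand side xi.
std::vector<double> solveRacStage1(const std::vector<double>& xi,
                                   const std::vector<double>& yi,
                                   const std::vector<double>& xw,
                                   const std::vector<double>& yw)
{
    const int n = 5;
    double AtA[5][5] = {}, Atb[5] = {};
    for (std::size_t j = 0; j < xi.size(); ++j) {
        const double a[5] = { xw[j] * yi[j], yw[j] * yi[j], yi[j],
                              -xw[j] * xi[j], -yw[j] * xi[j] };
        for (int r = 0; r < n; ++r) {
            Atb[r] += a[r] * xi[j];                 // accumulate A^T b
            for (int col = 0; col < n; ++col)
                AtA[r][col] += a[r] * a[col];       // accumulate A^T A
        }
    }
    // Gauss-Jordan elimination with partial pivoting on A^T A v = A^T b.
    for (int col = 0; col < n; ++col) {
        int piv = col;
        for (int r = col + 1; r < n; ++r)
            if (std::fabs(AtA[r][col]) > std::fabs(AtA[piv][col])) piv = r;
        for (int k = 0; k < n; ++k) std::swap(AtA[col][k], AtA[piv][k]);
        std::swap(Atb[col], Atb[piv]);
        for (int r = 0; r < n; ++r) {
            if (r == col) continue;
            double m = AtA[r][col] / AtA[col][col];
            for (int k = 0; k < n; ++k) AtA[r][k] -= m * AtA[col][k];
            Atb[r] -= m * Atb[col];
        }
    }
    std::vector<double> v(n);
    for (int r = 0; r < n; ++r) v[r] = Atb[r] / AtA[r][r];
    return v;                                       // (v1, v2, v3, v4, v5)
}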

3. Computation of R, $t_x$ and $t_y$

It is now possible to express the unknown parameter $t_y$ in terms of the intermediate parameters $v_1$ to $v_5$ obtained in the previous step.

Step 1. Define C to be the 2×2 matrix

$$C \equiv \begin{bmatrix} v_1 & v_2 \\ v_4 & v_5 \end{bmatrix}$$


If no row or column of C identically vanishes, it can be shown that

$$t_y^2 = \frac{S_r - \sqrt{S_r^2 - 4(v_1 v_5 - v_4 v_2)^2}}{2(v_1 v_5 - v_4 v_2)^2}$$

where $S_r \equiv v_1^2 + v_2^2 + v_4^2 + v_5^2$. Otherwise, the solution is simply

$$t_y^2 = (v_i^2 + v_j^2)^{-1}$$

where $v_i$ and $v_j$ are the nonzero elements in the appropriate row or column of C.

Step 2. Determination of the sign of $t_y$

Physically, the signs of $x_w$ and $x_i$, as well as of $y_w$ and $y_i$, should be consistent, as can be seen from the simple geometry conveyed by figure 3.1. This condition can be used to determine the sign of $t_y$, denoted sign($t_y$). Starting with the assumption that $t_y > 0$, we first calculate the parameters

$$r_1 = v_1 t_y, \quad r_2 = v_2 t_y, \quad r_4 = v_4 t_y, \quad r_5 = v_5 t_y, \quad t_x = v_3 t_y$$

Picking an arbitrary calibration point, we then compute the coordinates

$$x = r_1 x_w + r_2 y_w + t_x, \qquad y = r_4 x_w + r_5 y_w + t_y$$

If both sign(x) = sign($x_i$) and sign(y) = sign($y_i$), then sign($t_y$) is consistent, and $r_1, r_2, r_4, r_5$ and $t_x$ are retained. Otherwise, we let sign($t_y$) be negative and reverse the signs of $r_1, r_2, r_4, r_5$ and $t_x$ accordingly.

Step 3. Computation of R

There are exactly two possible solutions for a rotation matrix given its 2×2 sub-matrix. These two solutions produce two different values of $f_x$, one negative and one positive. Because the sign of $f_x$ must be positive, this condition can be used to eliminate one of the solutions for R. The first attempt is

$$r_3 = \sqrt{1 - r_1^2 - r_2^2}, \qquad r_6 = -\operatorname{sign}(r_1 r_4 + r_2 r_5)\,\sqrt{1 - r_4^2 - r_5^2}$$

$$[r_7 \;\; r_8 \;\; r_9]^T = [r_1 \;\; r_2 \;\; r_3]^T \times [r_4 \;\; r_5 \;\; r_6]^T$$

where $\times$ denotes the vector cross product. If this results in a positive $f_x$, which is computed in Stage 2 using the solved rotation matrix, we retain this solution. Otherwise, we reverse the sign of $f_x$ and modify the solution for the rotation matrix as follows:

$$r_3 = -\sqrt{1 - r_1^2 - r_2^2}, \qquad r_6 = \operatorname{sign}(r_1 r_4 + r_2 r_5)\,\sqrt{1 - r_4^2 - r_5^2}$$

$$[r_7 \;\; r_8 \;\; r_9]^T = [r_1 \;\; r_2 \;\; r_3]^T \times [r_4 \;\; r_5 \;\; r_6]^T$$

The resulting computed R may not be orthonormal, and it is necessary to apply an orthonormalization procedure to R.

STAGE 2: COMPUTATION OF $t_z$, k, $f_x$ AND $f_y$

Taking R, $t_x$ and $t_y$ to be known, one is able to estimate the remaining parameters $t_z$, k, $f_x$ and $f_y$. From the camera model of section 3.1, for the jth calibration point,

$$\begin{bmatrix} x_{i,j} & -\tilde{x}_j & -\tilde{x}_j \tilde{r}_j \end{bmatrix} \begin{bmatrix} t_z \\ f_x \\ k f_x \end{bmatrix} = -x_{i,j}\,\tilde{w}_j \tag{a.2}$$

where

$$\tilde{x}_j \equiv r_1 x_{w,j} + r_2 y_{w,j} + t_x, \qquad \tilde{w}_j \equiv r_7 x_{w,j} + r_8 y_{w,j}$$

Whenever the number of object calibration points is greater than three, an over-determined system of linear equations is established, which can then be solved by a linear least-squares algorithm for the unknowns $t_z$, $f_x$ and $k f_x$. After $f_x$ is obtained, we compute the other intrinsic parameters:

$$f_y = f_x\,\mu^{-1}, \qquad k = (k f_x)\,f_x^{-1}$$

where $\mu$ is the known ratio of scale factors.


BIBLIOGRAPHY

1. American Cancer Society, Cancer Facts & Figures 1999, www.cancer.org.
2. CancerBACUP, Cancer Treatments, www.cancerbaccup.org.uk/info/cervix.htm.
3. Weinhous, M.S.; Brady, L.W. Developments in Technology, A History of the Radiological Sciences, pp 43-87.
4. Schwartz, E.E. The Biological Basis of Radiation Therapy. Philadelphia: J.B. Lippincott Company, 1966, pp 210-216.
5. Gildenberg, P.L. The History of Stereotactic Neurosurgery, Neurosurgery Clinics of North America, pp 765-80, 1990.
6. Roberts, D.W. Stereotactic Technology in Tumor Surgery: Concepts and Principles, Journal of Neuro-Oncology, pp 281-4, 1995.
7. Galloway, R.L.; Maciunas, R.J. Stereotactic Neurosurgery, Critical Reviews in Biomedical Engineering, pp 181-205, 1990.
8. Calvo, F.A.; et al. Stereotactic Radiosurgery with Linear Accelerator, Rays, pp 462-85, Jul-Sep 1998.
9. Nestos, A.; et al. The Gamma Knife: Neurosurgery without an Incision, AORN Journal, pp 961-71, 974-5, 978-81, Apr 1990.
10. Carol, M.P. Peacock: A System for Planning and Rotational Delivery of Intensity-Modulated Fields, International Journal of Imaging Systems and Technology, pp 56-61, 1995.
11. Adler, J.R. Jr.; et al. The Cyberknife: A Frameless Robotic System for Radiosurgery, Stereotactic and Functional Neurosurgery, pp 124-8, 1997.
12. Chang, S.D.; et al. Clinical Experience with Image-Guided Robotic Radiosurgery (the Cyberknife) in the Treatment of Brain and Spinal Cord Tumors, Neurologia Medico-Chirurgica (Tokyo), pp 780-3, Nov 1998.
13. Galloway, R.L. Jr.; Maciunas, R.J.; Edwards, C.A. Interactive Image-Guided Neurosurgery, IEEE Transactions on Biomedical Engineering, pp 1226-31, Dec 1992.
14. Adler, J.R.; et al. Image-Guided Robotic Radiation Surgery, Proceedings of the International Symposium on Medical Robotics, 1994.
15. American Lung Association. Facts about Lung Cancer, www.lungusa.org/learn/lungcanc.html, 1997.
16. Macklis, R. Unpublished Radiation Oncology Research, The Cleveland Clinic Foundation, Cleveland, 1996.
17. Prescribing, Recording and Reporting Photon Beam Therapy. ICRU Report 50, International Commission on Radiation Units and Measurements.
18. Ameduri, S.A. Simulation of Image-Directed Radiation Therapy with Human Interaction, Technical Report TR 97-107, CAISR, Case Western Reserve University, 1997.
19. Asada, H.; Slotine, J.-J. Robot Analysis and Control, John Wiley & Sons, Inc., New York, 1986.
20. Abdel-Aziz, Y.I.; Karara, H.M. Direct Linear Transformation into Object Space Coordinates in Close-Range Photogrammetry, Proc. Symp. Close-Range Photogrammetry (Urbana, IL), pp 1-18, Jan 1971.
21. Ganapathy, S. Decomposition of Transformation Matrices for Robot Vision, Proc. IEEE Int. Conf. Robotics and Automation (Atlanta), pp 130-139, Mar 1984.
22. Weng, J.; Cohen, P.; Herniou, M. Camera Calibration with Distortion Models and Accuracy Evaluation, IEEE Transactions on Pattern Analysis and Machine Intelligence, pp 965-980, 1992.
23. Tsai, R.Y. An Efficient and Accurate Camera Calibration Technique for 3D Machine Vision, IEEE Computer Vision and Pattern Recognition, 1987.
24. Tsai, R.Y. A Versatile Camera Calibration Technique for High-Accuracy 3D Machine Vision Metrology Using Off-the-Shelf TV Cameras and Lenses, IEEE Journal of Robotics and Automation, pp 324-344, 1987.
25. Lenz, R.K.; Tsai, R.Y. Techniques for Calibration of the Scale Factor and Image Center for High Accuracy 3-D Machine Vision Metrology, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. PAMI-10, no. 5, 1987.
26. Tsai, R.Y. Techniques for Calibration of the Scale Factor and Image Center for High Accuracy 3D Machine Vision Metrology, Proceedings of the IEEE International Conference on Robotics and Automation, 1987.
27. Tsai, R.Y.; Lenz, R.K. A New Technique for Fully Autonomous and Efficient 3D Robotics Hand/Eye Calibration, IEEE Transactions on Robotics and Automation, vol. 5, 1989.
28. Haralick, R.M.; Shapiro, L.G. Computer and Robot Vision, Vol. I, Addison-Wesley, 1992.
29. Silicon Graphics, Incorporated. IRIS Digital Media Programming Guide, 1994.
30. Neider, J.; Davis, T.; Woo, M. OpenGL Programming Guide. Reading: Addison-Wesley, 1993.
31. OpenGL Architecture Review Board. OpenGL Reference Manual. Reading: Addison-Wesley, 1992.
32. Silicon Graphics, Incorporated. IRIS IM Programming Guide, 1993.
33. Silicon Graphics, Incorporated. IRIS ViewKit Programmer's Guide, 1994.
34. Young, D. ViewKit: An Application Framework for C++ and Motif, The X Advisor, vol. 1, no. 1, 1995.
35. Silicon Graphics, Incorporated. Developer Magic: RapidApp User's Guide, 1996.
36. Antonuk, L.E. Megavoltage Imaging with a Large-Area, Flat-Panel, Amorphous Silicon Imager, International Journal of Radiation Oncology, Biology, Physics, pp 661-772, 1996.
