
IEEE REVIEWS IN BIOMEDICAL ENGINEERING, VOL. 2, 2009

Three-Dimensional Ultrasound: From Acquisition to Visualization and From Algorithms to Systems


Kerem Karadayi, Student Member, IEEE, Ravi Managuli, Member, IEEE, and Yongmin Kim, Fellow, IEEE
Methodological Review

Abstract—One of the key additions to clinical ultrasound (US) systems during the last decade was the incorporation of three-dimensional (3-D) imaging as a native mode. Compared to previous-generation 3-D US imaging systems, today's systems offer easier volume acquisition and deliver superior image quality with various visualization options. This has come as a result of many technological advances and innovations in transducer design, electronics, computer architecture, and algorithms. While freehand 3-D US techniques continue to be used, mechanically scanned and/or two-dimensional (2-D) matrix-array transducers are increasingly adopted, enabling higher volume rates and easier acquisition. More powerful computing engines with instruction-level and data-level parallelism and high-speed memory access support new and improved 3-D visualization capabilities. Many clinical US systems today have a 3-D option that offers interactive acquisition and display. In this paper, we cover the innovations of the last decade that have enabled the current 3-D US systems from acquisition to visualization, with emphasis on transducers, algorithms, and computation.

Index Terms—3-D ultrasound, 4-D ultrasound, algorithms, systems, transducers, volume rendering.

I. INTRODUCTION

Since its advent more than half a century ago, diagnostic ultrasound (US) imaging has undergone significant transformations. It started out with one-dimensional (1-D) amplitude mode (A-mode), in which tissue structures only along a single scanline could be depicted or tracked over time (motion mode or M-mode), but transitioned to two-dimensional (2-D) imaging with the introduction of brightness mode (B-mode). Many other advances followed in the 1960s and 1970s, including continuous-wave [1] and pulsed Doppler US [2] and real-time B-mode US [3]. Color Doppler imaging was introduced in the 1980s [4], enabling real-time visualization of blood flow in 2-D. Digital beamforming instead of analog beamforming [5] came along in the 1990s, completing the digitalization of ultrasound machines from scan conversion to beamforming. These have led to substantial improvements in image quality and diagnostic utility, establishing 2-D US

Manuscript received July 01, 2009. Current version published December 01, 2009. K. Karadayi and Y. Kim are with the Departments of Electrical Engineering and Bioengineering, University of Washington, Seattle, WA 98195 USA (e-mail: kerem@u.washington.edu; ykim@u.washington.edu). R. Managuli is with the Department of Bioengineering, University of Washington, Seattle, WA 98195 USA and with Hitachi Medical Systems of America, Twinsburg, OH 44087 USA (e-mail: ravim@u.washington.edu). Digital Object Identifier 10.1109/RBME.2009.2034132

imaging as a routine clinical exam preferred for its safety, cost-effectiveness, portability, and interactive visualization. Similar to 2-D US, the concept of three-dimensional (3-D) US was first demonstrated in the 1950s [6] and has long been proposed to overcome the limitations inherent in 2-D imaging of 3-D anatomy. Such limitations include the difficulty of analyzing structures lying in planes other than the original planes of acquisition, the need to estimate volumes from 2-D-only measurements with geometrical assumptions, and challenges in obtaining the same views in longitudinal studies. Over the course of 3-D US development, various approaches were used to acquire US volume data. For example, Howry et al. [6] collected volume data from objects embedded in water baths using a mechanical assembly that translated a single-element transducer up and down while oscillating it sideways to acquire 2-D cross-sections at varying heights. Brinkley et al. [7] used a freehand approach, in which a mechanically scanning 2-D imaging probe was swept manually in a direction perpendicular to its imaging plane to acquire a series of arbitrary 2-D scans covering a 3-D region. They placed spark gaps on the US probe as sound sources to acoustically determine the position/orientation of the probe with six degrees of freedom, which was later used to register each 2-D scan plane into a 3-D Cartesian volume. Much research in the 1980s and 1990s went into improving acquisition and visualization as well as demonstrating the clinical utility of 3-D US imaging. Freehand techniques during this time utilized tracking via magnetic sensors [8], optical sensors [9], or mechanically articulated arms [10], while sensorless methods were used in applications where geometric accuracy was not deemed critical. Speckle decorrelation between consecutive frames was also proposed as a way to track probe motion without a position sensor [11].
Freehand techniques became popular because they were relatively inexpensive to implement and allowed unconstrained scan geometries (user-defined volumes). Dedicated 3-D probes were introduced in the 1990s, with the two prevailing technologies being mechanical probes [12] and 2-D matrix-array transducers [13]. However, 3-D US remained mostly confined to research settings in the 1990s due to tedious and lengthy acquisition and reconstruction, suboptimal image quality, and the lack of a clearly demonstrated added diagnostic value [14]. More innovations in transducer design, electronics, computer architecture, and algorithms occurred in the last ten years. Mechanical 3-D probes became more compact and faster, and 2-D

1937-3333/$26.00 © 2009 IEEE


Fig. 1. A 34-week-old fetus with bilateral cleft lip visualized using (a) conventional 2-D US imaging and (b) 3-D US volume rendering. The bilateral cleft lip is very difficult to visualize in the conventional 2-D image, but very easy to diagnose using 3-D US.

Fig. 2. MPR and volume-rendered images of a uterus. An arcuate uterus (an indentation on the superior aspect of the endometrial cavity) can be clearly visualized in the coronal (lower left) and volume-rendered (lower right) images. This view is very difficult to obtain in the conventional longitudinal (upper left) and transverse (upper right) views.

matrix-array transducers evolved from sparse arrays to fully sampled arrays to deliver higher image quality. Advances in computer architecture and semiconductor technology enabled more powerful processors that support instruction-level and data-level parallelism and high-performance input/output (I/O) capabilities. As a result, it became possible to support new and better 3-D visualization algorithms, reconstruct volumes as they are acquired, and provide visualization at interactive rates. These advances considerably eased some of the technical challenges experienced by previous-generation 3-D US systems. Current-generation 3-D systems deliver superior image quality with better visualization and offer easier acquisition with significantly reduced scan times. Four-dimensional (4-D) (3-D + time) imaging is now possible on certain systems because

of the ability to capture and process 3-D volumes fast enough (e.g., 20 volumes/s) to visualize 3-D anatomy with its motion. As a result of these developments, a new wave of clinical research in 3-D/4-D US was initiated. The originally claimed benefits of going to 3-D imaging are now being verified, and new uses are being demonstrated. For example, in obstetrics, 3-D views were shown to be superior to 2-D US in evaluating certain fetal anomalies, such as a bilateral cleft lip (Fig. 1) or a cleft secondary palate [15], [16], and in the assessment of fetal ribs [17], [18]. In gynecology, 3-D US was shown to be helpful in assessing congenital uterine anomalies [19]-[21], such as an arcuate uterus (Fig. 2). In echocardiography, the ability to obtain en face (forward-facing) views of the mitral valve from the left atrium, which was not possible prior to 3-D US, has


Fig. 3. Transducer technologies used for 3-D US acquisition: (a) mechanical 3-D probes, (b) 2-D matrix-array transducers, and (c) freehand 3-D acquisition using a conventional 1-D array with position sensor.

been found to facilitate the assessment of valvular disease [22]. Also, workflow improvements have been reported as a result of reduced scan times because one volume acquisition can replace several planar acquisitions. One application that was shown to benefit from the reduced scan time is exercise stress echocardiography, during which measurements from multiple scan planes need to be made in a relatively short amount of time following the exercise to be as close to peak stress as possible [23]. In such examinations, the availability of full volume datasets was also shown to facilitate better alignment of the same views for baseline and peak-stress measurements. There are quite a few review papers on recent clinical experience with 3-D/4-D US [22], [24]-[39]. Also, several review papers on the technical aspects of 3-D US were published from 1996 to 2001 [40]-[44]. In this paper, we focus on the recent technological developments that have taken place in volume acquisition and visualization, including transducers, algorithms, and computation. We also discuss the remaining challenges to be overcome in 3-D US and the possible future directions of 3-D US technology.

II. VOLUME ACQUISITION

The development and maturation of dedicated 3-D probes have played a major role in enabling easier and faster acquisition of volume data. These include mechanical 3-D probes and 2-D matrix-array transducers. In addition, freehand techniques have improved, and they continue to provide a lower cost alternative to dedicated 3-D probes.

A. Mechanical 3-D Probes

A modern mechanical 3-D probe consists of a 1-D array transducer and a compact motor coupled together and placed inside the probe housing [Fig. 3(a)]. The motor translates, rotates, or wobbles the 1-D transducer back and forth to insonate a 3-D volume of interest. The constrained scan geometry of a mechanical probe makes registration of the acquired 2-D images simpler because the position and/or orientation of the transducer can be determined readily. Early-generation mechanical probes were slow (e.g., it took 4 s to acquire a 40° volume sector in 1993 [45]). They have evolved throughout the last decade to become more compact and faster, delivering volume acquisition rates of typically several volumes/s or higher depending on the sector size. This not only has resulted in a reduction of artifacts due to tissue/patient motion but also has enabled acquisition of volume datasets fast enough for interactive visualization. Systems based on mechanical probes are cheaper to support than those based on 2-D matrix-array transducers because they use a 1-D array transducer and the beamforming requirements are the same as those in conventional 2-D imaging. They also provide image quality in the acquisition plane comparable to that delivered by conventional 2-D imaging. As a result, mechanical 3-D probes are increasingly being adopted for 3-D US volume acquisition, particularly in OB/GYN applications. Their use in cardiology applications, however, has been limited due to low volume acquisition rates. Also, since acquisitions are performed while the transducer is in motion, and color/power Doppler imaging requires an ensemble of echoes to be obtained along the same direction for each scanline, their volume rates are severely restricted in Doppler modes.
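The volume-rate restriction in Doppler modes follows directly from pulse-echo timing: each scanline costs one round trip of sound per transmitted pulse, and Doppler multiplies that cost by the ensemble length. The following back-of-the-envelope sketch illustrates this arithmetic; all parameter values (depth, line and plane counts, ensemble length) are illustrative assumptions, not figures from the paper:

```python
# Rough upper bound on volume rate for a mechanically scanned 3-D probe.
C = 1540.0  # nominal speed of sound in soft tissue (m/s)

def volume_rate(depth_m, lines_per_plane, planes_per_volume, ensemble=1):
    """Volumes/s assuming one pulse-echo round trip per scanline,
    repeated `ensemble` times per line for Doppler modes."""
    t_line = 2.0 * depth_m / C  # round-trip time for one transmit event
    t_volume = t_line * lines_per_plane * planes_per_volume * ensemble
    return 1.0 / t_volume

b_mode = volume_rate(0.10, 96, 40)                # ~2 volumes/s
doppler = volume_rate(0.10, 96, 40, ensemble=8)   # 8x slower with ensemble of 8
```

The ensemble factor divides the achievable volume rate directly, which is why mechanical probes that are adequate for B-mode volume imaging become severely rate-limited in color/power Doppler.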


B. Two-Dimensional Matrix-Array Transducers

Two-dimensional matrix-array transducers were first introduced in the early 1990s [13]. They steer an ultrasound beam electronically in two perpendicular directions (azimuth and elevation) to insonate a typically pyramid-shaped volume [Fig. 3(b)]. No moving parts are involved, and parallel receive beamforming techniques can be utilized in both lateral directions (azimuth and elevation) to acquire multiple scanlines (e.g., 4 × 4) for each transmitted ultrasound beam [46]. Therefore, they can achieve higher volume rates than mechanical probes. While high volume rates come at the expense of lowered image quality due to broader transmitted beams and artifacts arising from multiple scanline formation per transmitted beam [47], 3-D echocardiography has considerably benefited from the availability of 2-D arrays. On the other hand, constructing 2-D matrix-array transducers and supporting them on 3-D US systems involve many challenges [48]. To be practically usable in echocardiography, the footprints of 2-D array transducers need to be similar to those of conventional 1-D array transducers to allow intercostal views of the heart without rib shadows. This implies that a large number of 2-D array elements need to be fitted in the same area as that of a 1-D array transducer (e.g., 64 × 64 elements instead of only 256 elements). The smaller element size and interspacing present difficulties in the electrical connection of each piezoelectric crystal element within the transducer head. Smaller element size also results in lowered capacitance and hence increased electrical impedance mismatch between the element and the coaxial cable that interconnects the element to the US system [49], [50]. Many preamplifiers and matching circuits in the probe housing are used to overcome the poorer transmission efficiency and improve the signal-to-noise ratio (SNR), which leads to bulkier probes.
Another challenge involving 2-D arrays is 3-D beamforming. Because focusing and steering are performed in both the azimuthal and elevational planes, a 2-D set of delays is required. This substantially increases the required number of channels over what would be required for a 1-D array. For example, to achieve beamforming quality similar to that of a 1-D array using 64 channels, 4096 channels would be necessary. This not only makes beamforming computationally very expensive but is also prohibitive in that the number of connections needed between the probe and the US machine is practically limited by the size and weight of the interconnecting probe cable. All these challenges led the earlier 2-D array transducers to contain a smaller number of elements (e.g., 32 × 32 instead of 64 × 64), with only a subset of elements being active at the same time (i.e., sparse arrays), to overcome the beamforming complexity. Since only a smaller portion of the transducer aperture is used to transmit and receive, focusing was compromised, resulting in suboptimal beam shapes with large sidelobes [51]. Smaller apertures also lowered the sensitivity further. Consequently, the 2-D image quality achievable using the first-generation 2-D array transducers was considerably inferior to that typically provided by conventional 1-D array transducers. It took another decade (i.e., the early 2000s) for fully sampled arrays to become practical [52]. In these, a subarray beamforming approach was taken, where all elements were used to

transmit and receive but beamforming was split into two stages: fine delays and summation between the signals received by immediately neighboring elements were implemented on compact analog electronics placed inside the transducer head, whereas larger delays and final summations were implemented digitally in the main beamformer unit inside the US machine. This keeps the number of active channels and connections to the beamformer unit at a manageable number while at the same time enabling realization of better beam profiles with reduced sidelobes. Also, because all the elements are used to transmit and receive, sensitivity is improved over sparse arrays. This has resulted in considerable image quality improvements over sparse 2-D arrays. Because they can focus equally in both lateral directions (elevation and azimuth), 2-D arrays have the advantage of improved elevational resolution over 1-D arrays. While fully sampled 2-D arrays offer significantly better image quality than the earlier sparse arrays, their azimuth-plane resolution and sensitivity are still inferior to those of conventional 1-D arrays as of today.

C. Freehand Scanning

While mechanical 3-D probes and 2-D matrix-array transducers are increasingly used for volume acquisition, freehand techniques continue to be a less costly alternative. In sensor-based freehand acquisition, a position sensor is attached to a conventional 1-D array probe [Fig. 3(c)], and image and sensor data are acquired simultaneously. The 3-D US volumes are then visualized following geometric registration of arbitrary 2-D slices into 3-D Cartesian coordinates and reconstruction. Sensorless acquisition, where registration is performed based on an assumption about the scan geometry (e.g., linear translation at constant speed), is also used when geometric accuracy is not critical.
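The sensor-based registration step just described, in which each pixel of a tracked 2-D frame is mapped into 3-D Cartesian coordinates, can be sketched with two homogeneous transforms: a fixed image-to-sensor calibration and a per-frame pose reported by the tracker. The matrix values, pixel spacings, and function names below are illustrative assumptions, not the API of any specific tracking system:

```python
# Minimal sketch of sensor-based freehand registration.
import numpy as np

def pixel_to_world(u, v, pixel_spacing_mm, T_world_from_probe, T_probe_from_image):
    """Register image pixel (u, v) into 3-D world coordinates (mm).

    T_probe_from_image: calibrated 4x4 transform from the image plane to the
    position sensor mounted on the probe. T_world_from_probe: 4x4 pose
    reported by the tracker for this frame."""
    sx, sy = pixel_spacing_mm
    p_image = np.array([u * sx, v * sy, 0.0, 1.0])  # pixel lies in the z=0 image plane
    return (T_world_from_probe @ T_probe_from_image @ p_image)[:3]

# Example: identity calibration, probe translated 10 mm along world z.
T_cal = np.eye(4)
T_pose = np.eye(4)
T_pose[2, 3] = 10.0
p = pixel_to_world(100, 50, (0.2, 0.2), T_pose, T_cal)  # -> [20. 10. 10.]
```

In a real system, T_probe_from_image comes from a calibration procedure (one of the error sources discussed below), and T_world_from_probe is streamed per frame from the magnetic or optical tracker.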
Since no strict restrictions are imposed on the field of view, freehand techniques are advantageous for acquiring large or irregular volumes, provided that low volume rates can be tolerated and that little or no motion of the anatomy occurs during scanning. Because they can be used with almost any probe, including high-frequency linear arrays, they are used for 3-D vascular, musculoskeletal, or small-parts imaging. In addition, they are being used in several commercial products (e.g., Hitachi's Real-time Virtual Sonography or GE's Volume Navigation) for interactive registration of preoperative computed tomography (CT) or magnetic resonance imaging (MRI) volumes to intraoperative ultrasound images [53], [54] to aid in interventional procedures. For example, in radio-frequency (RF) ablation of hepatic tumors, preoperative CT images can be used for delineating malignant nodules, whereas intraoperative US images can track tissue deformation as well as visualize the RF-ablation target area. Therefore, interactive fusion of images from both modalities during surgical interventions can combine the benefits of both [55]. Much research in freehand techniques in the last decade focused on improving image quality by tackling various error sources in the registration of 2-D scan planes to 3-D volumes. These include sensor calibration [56], tissue deformation during scanning [57], and sensor noise in estimating probe position/orientation [58]. A number of algorithms have also


Fig. 4. 3-D processing flow for volume visualization via planar and volume views.

been proposed for reconstructing 3-D volumes from freehand acquisitions, a detailed review of which can be found in [59]. Research into more accurate, robust, and easy-to-use freehand techniques continues. More recent freehand approaches include tracking using less obtrusive microelectromechanical systems (MEMS) sensors [60], optical fiber-based sensors, and hybrid approaches in which multiple tracking techniques are combined (e.g., speckle decorrelation + MEMS [61]). Progress made in techniques not requiring an obtrusive sensor is encouraging. However, such techniques need to become more robust and accurate: current techniques do not work well when certain transducer motion conditions are not met or when images do not contain fully developed speckle.

III. VOLUME VISUALIZATION

Once an ultrasound volume is acquired using a mechanical probe, a 2-D matrix-array transducer, or freehand scanning, the resulting volumes can be visualized in a multitude of ways following reconstruction. These can essentially be grouped into two categories: 1) planar views and 2) volume views. Planar views are similar to conventional 2-D ultrasound views: even though 3-D data are acquired, 2-D cross-sections are displayed to the user. Volume views, on the other hand, integrate information from an entire volume into a single image to provide en face views of the underlying 3-D anatomy, i.e., more like what a surgeon would see during surgery. Today's 3-D US systems support volume visualization via both planar and volume views. A 3-D processing flow similar to that shown in Fig. 4 is typically used for this purpose, which follows a pipeline that involves acquisition via transducers, front-end beamforming, and back-end signal and image processing (Fig. 5). The first task in 3-D processing is volume reconstruction, which registers and resamples acquired 2-D US slices into

a 3-D Cartesian volume. If freehand acquisition is used, the information from a position sensor, an assumption about the scan geometry (e.g., a linear transducer translating at constant speed), or an estimate of scan-plane displacement based on speckle decorrelation techniques is needed for volume reconstruction. The known scan geometry of mechanical probes or 2-D array transducers simplifies the volume reconstruction process to 3-D scan conversion (3-D SC), in which data acquired in the scan geometry of the probe are resampled into a Cartesian volume via explicit coordinate transformations (typically from polar coordinates). In the following sections, we discuss in detail the planar and volume views along with the various processing components (following volume reconstruction) that are needed to generate such views. An acquisition/visualization technique known as spatiotemporal image correlation (STIC), which can generate both planar and volume views, is also discussed.

A. Planar Views

In the earlier attempts at 3-D ultrasound, fly-through images were provided to the user by displaying the original 2-D slices of acquisition in sequential order. The only way the user could interact with the volumes was to go back and forth through the sequence and mentally construct a 3-D impression of the underlying anatomy. Quantities such as organ volume could also be estimated from measurements made on individual cross-sections. As more powerful computers came along, arbitrary slicing of an acquired volume to display any cross-sectional plane independent of the original acquisition direction became possible. Generally, two types of planar views are supported on today's 3-D US systems: orthogonal planar reconstruction and parallel planar reconstruction. In the first, three planes that are orthogonal to each other are reconstructed and displayed to the user at the same time. This is usually denoted as multiplanar reconstruction (MPR).
While the three MPR planes remain orthogonal to each other, they can typically be shifted with respect to each other, so they intersect the acquired volume and each


Fig. 5. Overall 3-D US imaging pipeline: acquisition with transducers, front-end beamforming, back-end signal/image processing, and 3-D processing.

Fig. 6. An example illustrating parallel planar reconstruction of a fetal heart. This mode is also denoted as TUI or MSV on commercial 3-D US machines. (Reproduced with permission from [62].)

other at different positions. They can also all be rotated so that they intersect the volume at different angles. The visualization of an arcuate uterus (a normally shaped uterus but with an indentation on the superior aspect of the endometrial cavity) in Fig. 2 is a good example, demonstrating the use of MPR in gynecology. In the case of parallel planar reconstruction, multiple (e.g., 9 or 12) sequential cross-sections that are parallel to each other are reconstructed and presented to the user simultaneously (Fig. 6). This view is similar to the diagnostic views frequently used in the interpretation of MRI or CT images and is usually denoted as multislice view (MSV) or tomographic ultrasound imaging (TUI) [62]. In terms of algorithms, there is not much difference in how the planar views are reconstructed in MPR, in TUI, or as a single cross-sectional plane. Once the geometry of a plane is defined via a plane equation in 3-D space, the input coordinates in the ultrasound volume corresponding to each pixel on the planar output image are computed (inverse mapping). The output pixel is then computed via an interpolation of the neighboring voxels. In general, conventional 2-D images generated by a 1-D transducer are superior in image quality to planar images reconstructed from a 3-D volume. The anisotropic resolution of ultrasound (different axial, azimuth, and elevation resolutions) has an adverse effect on the image quality of planes resampled from volume data in different directions. When simple kernels (e.g., trilinear interpolation) are used during resampling to save

computation, additional blur and aliasing can result. Furthermore, when a plane to be visualized is not oriented along the direction of insonation (the direction in which the ultrasound beams are transmitted and received), various US artifacts manifest themselves differently than in conventional 2-D US imaging, which can make them difficult to recognize [43]. For example, a shadowing artifact, which is easily distinguishable in conventional 2-D US imaging, could be mistaken for a hypoechoic region in planar images reconstructed from a 3-D volume. Also, spatial compounding and multiple focal depths, which are image optimization options for 2-D imaging, are not practical for a mechanical 3-D probe due to its continuous motion. Until now, image quality improvements in planar views have mainly come from higher quality volume acquisition provided by better transducers, a more sophisticated US signal/image processing pipeline, and the use of higher order interpolation kernels (e.g., cubic instead of linear) enabled by increased computing power. Also, as US systems become more programmable and flexible in their signal/image data paths and more powerful processors with better I/O capabilities are used, direct resampling of planar views from pre-SC data (i.e., bypassing the 3-D SC in Fig. 4) becomes possible, which improves the image quality further [63].

B. Volume Views

Many improvements in volume views occurred in the last decade. In volume views, each pixel value on the output image is determined by casting a ray from that pixel into a volume based on a viewing transformation (Fig. 7). The voxel information encountered along the path of each ray is then integrated into the pixel value using one of several different techniques. When 3-D visualization transitioned from an offline postprocessing step on an external computer to a mode natively supported on the US machine [64], volume views were initially generated based on intensity projection techniques.
For example, with maximum intensity projection (MIP), the maximum voxel value along the ray was used, whereas minimum intensity projection (mIP) used the minimum voxel value. Alternatively, summation or averaging was performed along a ray to obtain X-ray-like projections. While these continue to be used in today's 3-D US systems in some cases [e.g., to visualize a fetal spine (MIP)], a major advance in the last decade has been the incorporation of more realistic, opacity-based volume-rendering techniques. These are based on optical models [65] that delineate surfaces


Fig. 7. Volume rendering using direct ray casting. Voxels along each ray are resampled via a trilinear interpolation of eight neighboring original voxels.

and convey depth information better, such as the one shown in the example of Fig. 1(b) for the visualization of fetal cleft lips. Opacity-based volume rendering is discussed in more detail below, followed by fast algorithms for interactive rendering, preprocess filtering to improve rendering image quality, and multivolume rendering techniques to visualize blood flow along with tissues.

1) Opacity-Based Volume Rendering: Levoy's paper in 1988 [66] established the basis of opacity-based rendering algorithms for visualization of 3-D datasets in medical imaging, including ultrasound. Two key operations, volume classification and shading, form the basis of his technique for achieving high-quality renderings. Volume classification is the process of determining the visibility of each voxel based on a decision of whether that voxel belongs to an anatomical section of interest to the user (tissue, organ, surface) or not. Volume shading is the process of assigning colors to voxels based on the orientation of surfaces to improve depth perception in the final output image. Following volume classification and shading, the resulting opacities and colors are projected onto a 2-D image plane via ray casting. This projection operation using voxel opacities/colors is usually denoted in the computer graphics literature as compositing.

a) Volume classification: In volume classification, opacities are assigned to the voxels in a volume dataset. Opacity is an optical property that indicates what proportion of incoming light each voxel in a volume blocks and what proportion it passes through. It also determines how much light is scattered back from each voxel. Most medical volume datasets, including ultrasound, are not real optical data. Therefore, the opacities used in volume rendering are not true optical opacities of the dataset to be visualized but values assigned by the user to achieve varying visual effects.
This way, structures of interest can be made more prominent through the assignment of opacities, whereas background structures can be made less visible. The assignment of opacities to voxels is carried out via the definition of transfer functions (TFs) that map the scalar voxel values to opacity values. The design of an optimum opacity TF for a given dataset is an important task in achieving high-quality rendering. Highlighting the diagnostically relevant structures while suppressing

less relevant details inside a volume via a selection of TFs can be tedious. Because of this, piecewise-linear TFs are commonly used to simplify the design of TFs and their adjustment via a handful of control points. Many 3-D ultrasound systems offer several preset TFs that have been optimized by the manufacturer for different clinical scenarios, and the user can select one from the available set. Several approaches have been proposed for adaptively designing TFs either automatically or semiautomatically [67]. TF design for US data is more challenging because they lack clear and strong boundaries, except where the acoustic impedance mismatch between two adjacent layers is significant, as at tissue-fluid interfaces (e.g., fetus versus amniotic fluid and blood versus vessel walls). This, combined with the presence of speckle noise even within a homogeneous structure, makes adaptive TF design very difficult. Honigmann et al. [68] proposed an algorithm for the adaptive design of opacity TFs specifically for ultrasound volumes to highlight tissue-fluid boundaries, such as fetus-amniotic fluid interfaces. They mathematically determined that parabolic TFs provide the best contrast at such interfaces for differentiating tissues with dissimilar voxel intensities (tissue contrast) as well as for recognizing the depth order of similar tissues (similar intensities) at different depths (depth contrast) in the rendered images. They later extended their approach to time-varying volume datasets for 4-D imaging [69]. In both approaches, high-contrast visualization of tissue-fluid interfaces was targeted. The design of optimal TFs for the visualization of US data that do not possess clear tissue-fluid boundaries is an area that has not been explored much. Also, while providing better contrast in rendered images is an important and valuable objective, no studies have yet reported on how to relate TF design to the extraction of what is clinically more relevant and what provides more diagnostic information.
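The piecewise-linear TFs adjusted via a handful of control points, as described above, can be sketched as follows. The control-point values are illustrative assumptions, not a manufacturer preset; they suppress low echo values (fluid and noise) while making brighter tissue opaque:

```python
# Minimal sketch of a piecewise-linear opacity transfer function (OTF):
# sorted (voxel_value, opacity) control points, linearly interpolated.
def make_piecewise_linear_otf(control_points):
    """control_points: sorted list of (voxel_value, opacity) pairs."""
    def otf(v):
        pts = control_points
        if v <= pts[0][0]:
            return pts[0][1]
        for (x0, a0), (x1, a1) in zip(pts, pts[1:]):
            if v <= x1:
                t = (v - x0) / (x1 - x0)   # position within this segment
                return a0 + t * (a1 - a0)  # linear interpolation
        return pts[-1][1]
    return otf

# Illustrative preset: voxel values 0-255, opacities 0-1.
otf = make_piecewise_linear_otf([(0, 0.0), (60, 0.0), (120, 0.8), (255, 1.0)])
```

Adjusting such a TF then amounts to dragging a few control points, rather than specifying an opacity for every possible voxel value.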
Therefore, there is certainly room for improvement in optimal design of TFs for 3-D US data. In addition to opacity TFs (OTFs), another type of TF also exists in volume rendering. These are called color transfer functions (CTFs). Similar to how an OTF assigns opacity to each voxel, a CTF assigns color. In other modalities, e.g., CT or MRI, a CTF is used to assign different colors to different tissue types to enhance visualization. In ultrasound, such differentiation based solely on voxel values is not trivial. Therefore, in
its simplest form, CTFs used in ultrasound have mostly been a linear mapping between ultrasound values (e.g., B-mode magnitudes) and a simple colormap (e.g., shades of gray or other color); thus, they generally are not implemented explicitly. Nevertheless, an advantage of using CTFs in ultrasound is in multivolume rendering, i.e., when B-mode and color/power Doppler volumes are rendered together to visualize blood flow within tissues. These techniques are discussed later. Shading: Shading consists of 1) detection of surface orientations within a volume and 2) color assignment to each voxel based on an illumination model and the orientation of the surface to which the voxel belongs. Detection of surface orientations is typically achieved via a volume gradient operation, which returns a gradient vector at each voxel location indicating the direction of the greatest rate of change in voxel values. Normalizing each gradient vector by its magnitude yields surface normal vectors, which are then used in the computation of how the light will reflect from each voxel. Once the surface normals are computed for all the voxels, color is assigned to each voxel based on an illumination model. Typically, Phong illumination [70] or its variations are used. 2) Fast Volume Rendering Algorithms: The traditional ray casting (direct ray casting) algorithm employed by Levoy [66] uses rays directly projected from pixels on an image plane, which intersect the 3-D object volume at irregular locations depending on the viewing angle (Fig. 7). This causes voxel sampling locations, defined at equidistant intervals along each ray, to fall in between the original slices of the volume. Therefore, 3-D interpolation is used, requiring voxels from multiple volume slices (e.g., trilinear interpolation uses the eight nearest voxels in the original volume grid; four from the front slice and four from the back slice). Also, input voxel accesses are scattered across the input volume.
Such an incoherent data-access pattern is computationally inefficient because memory architectures are optimized to provide high throughput for localized and sequential accesses but suffer from longer latencies in case of random accesses. Therefore, direct ray casting causes many cache and memory page misses, resulting in long latencies. To overcome the drawback of direct ray casting, various fast algorithms have been proposed. The shear-warp algorithm tries to overcome the high computational cost of traditional direct-ray-cast volume rendering by breaking down ray casting into two stages that can be implemented efficiently [71]. The main idea behind shear-warp is to process the 3-D data slice by slice on the original volume grid to reduce the computationally expensive 3-D interpolations to 2-D interpolations and also make data accesses coherent by confining them to one slice at a time. Slice-by-slice processing is achieved by mathematically factorizing the viewing matrix, which results in two operations: a volume shear component and a warp component. The shear component aligns all the slices such that the rays intersect them at right angles, and hence sampling locations fall on a regular grid during projection of rays [Fig. 8(a)]. Following projection of each slice onto a 2-D intermediate projection plane, a 2-D final warp on the projection plane corrects for the distortion introduced by the volume shear. A major drawback of the shear-warp algorithm, however, comes

Fig. 8. Fast volume rendering using (a) shear-warp versus (b) shear-image-order algorithms. Bilinear interpolation is used within each slice to resample each voxel along a ray from the four neighboring original voxels.

from its own advantage, i.e., confinement of voxel sampling locations to discrete slice locations. Although this requires fewer operations, it results in poor sampling and aliasing in compositing, particularly for volume data or opacity transfer functions with high-frequency content. Also, multiple stages of resampling (one during compositing and the other in the affine warp) result in loss of sharp details. By upsampling the volume along the ray-casting (compositing) direction and supersampling within each slice, aliasing can be reduced, and loss of sharp details can be minimized [72], although this comes at a cost of increased computation. Even with this technique, however, the data accesses remain coherent. Another type of artifact that occurs with shear-warp is the Venetian-blinds artifact. As seen in Fig. 8(a), shearing the original volume slices on top of each other results in regions where data exist in one slice for compositing but no corresponding data exist for the same ray in the subsequent slices. This leads to parallel stripes of alternating dark and bright colors, i.e., the Venetian-blinds artifact [Fig. 9(b)], at some viewing angles. While upsampling and supersampling help reduce the appearance of these artifacts, the Venetian-blinds artifact is a fundamental limitation of shear-warp and is difficult to remove completely. To address the aliasing problem due to the confinement of sampling at slice locations, Engel et al. [73] proposed to precompute the opacity and color contributions from each ray segment between discrete slice locations (with an assumption of a continuous linear variation of voxel values within each segment) using a continuous ray-casting integral and store them in a lookup table (LUT) for all possible voxel value pairs at the two


Fig. 9. Volume rendering using (a) direct ray-casting, (b) shear-warp, and (c) shear-image-order algorithms without (upper row) and with (lower row) preintegration. Staircasing artifacts (white arrows) due to aliasing can easily be seen in the shear-warp and shear-image-order rendering without preintegration. This artifact is considerably suppressed in direct ray casting because sampling locations are not restricted to fall onto slice locations. The Venetian-blinds artifacts (black arrow) due to sliding of volume slices on top of each other are easily recognizable in shear-warp rendering. These are more pronounced and manifest themselves over the entire volume in preintegrated shear-warp because slabs (pairs of slices) slide on top of each other instead of individual slices. Preintegrated shear-image-order avoids both these artifacts and delivers image quality comparable to direct ray casting.

ends of a ray segment. Schulze et al. [74] later applied preintegration to the shear-warp algorithm. Preintegration of colors and opacities results in a significant reduction of the artifacts from sampling at discrete slice locations, eliminating the need for upsampling in the compositing direction. Since the additional overhead of preintegration lookups is minimal, preintegrated volume rendering is computationally more efficient than volume upsampling in the depth direction. The improvement in image quality using preintegration over a simple Riemann-sum approximation during compositing can be seen in Fig. 9, where ultrasound volume data were acquired from a fetus phantom using a Hitachi HI VISION Preirus system equipped with a mechanical 3-D probe and then rendered offline on a PC using different rendering algorithms. While the aliasing artifacts are effectively reduced using the preintegrated color and opacity LUTs in shear-warp, other drawbacks of shear-warp (e.g., the Venetian-blinds artifact and blurring due to two stages of resampling) still remain. The shear-image-order algorithm was proposed mainly to overcome the drawbacks associated with shear-warp [75]. It resamples each slice such that the interpolated voxels are aligned with the pixels in the final image [Fig. 8(b)]. This eliminates the need for the final affine warp in the shear-warp algorithm, thus preserving sharp details better. At the same time, unlike direct ray casting, it maintains shear-warp's memory-access efficiency by confining the sampling locations to within each slice. The shear-image-order algorithm also overcomes the Venetian-blinds artifact of shear-warp because no volume shear, i.e., shearing of slices on top of each other, is needed. Instead, each slice undergoes a 2-D shear to correct for the distortion resulting from confining the sampling locations to the original slice locations. Since an affine warp has to be performed on each slice, it is computationally more expensive than shear-warp. However, it is still much faster than direct ray casting. The aliasing problem due to sampling at discrete slice locations also exists in the shear-image-order algorithm and can be alleviated via upsampling in the compositing direction, albeit at increased computational cost. To address aliasing without a significant increase in computation, our group proposed a combination of shear-image-order and preintegration [76]. Fig. 9 illustrates the rendering obtained with each of these algorithms: direct ray casting, preintegrated direct ray casting, shear-warp, preintegrated shear-warp, shear-image-order, and preintegrated shear-image-order. Preintegrated shear-image-order provides a fine balance between image quality and required computation. 3) Filtering: In 3-D US, the quality of volume-rendered images is severely degraded by the presence of speckle noise. When voxels corrupted by speckle noise are projected onto a 2-D image plane during volume rendering, they produce a grainy pattern that compromises the clear depiction of anatomy in the output image. In addition, speckle noise leads to erroneous gradient vector computation, which significantly deviates the surface normals of the underlying structures. Volume rendering based on erroneous gradients and noisy data produces poor shading with many dark and bright spots. For this reason, prior to volume rendering, ultrasound volume data are usually preprocessed to improve the SNR. Several different approaches have been proposed in the literature to reduce the undesired effects of speckle noise in volume visualization. Sakas et al. [77] proposed the use of a 3-D median
filter followed by a 3-D low-pass filter to reduce speckle and improve the quality of rendered images. A 3 × 3 × 3 or 5 × 5 × 5 median filter followed by a 3 × 3 × 3 low-pass filter produced the best results. Kim and Park [78] proposed to replace the 3-D median filter with a 2-D truncated median filter instead, applied on the individual 2-D slices in a volume before the application of a 3 × 3 × 3 low-pass filter. A 2-D truncated median filter was more efficient to compute than a 3-D median filter and provided an efficient means to approximate a mode filter, which had been proposed as a maximum-likelihood estimator for removing speckle in ultrasound images [79]. Shamdasani et al. [80] proposed a dual-filter approach that employs two different and independently controlled filters: a 3-D filter before compositing and a 3-D filter before gradient computation. Filtering the volume data before gradient computation with a moderate-size kernel (e.g., 7 × 7 × 7) can suppress speckle noise effectively. This enables better estimation of surface normals for a smoother shading effect that enhances 3-D perception. At the same time, filtering the volume data before compositing using a small kernel (e.g., 3 × 3 × 3) achieves a reasonable reduction in noise and improvement in image quality without excessively smoothing out the details. A major drawback of the dual-filter approach, however, is the increased computational cost because the volume needs to be filtered twice. Shamdasani et al. [80] also suggested the use of a 3-D boxcar filter in place of a 3-D low-pass filter (e.g., Gaussian) to reduce the computation time. Although Gaussian filters offer better frequency-response characteristics, the differences are barely noticeable when small kernels (e.g., 3 × 3 × 3 or 5 × 5 × 5) are used. On the other hand, boxcar filters are attractive from a computational standpoint in that no multiplications are required, and they can be sped up significantly (more or less independently of the kernel size) by moving-average techniques [81].
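The moving-average idea behind fast boxcar filtering can be sketched in one dimension. The Python fragment below is an illustration, assuming edge replication at the borders (a boundary policy not prescribed by the paper); a 3-D boxcar is separable, so the same routine would be applied along each axis in turn:

```python
def moving_average_1d(x, k):
    """Boxcar filter of odd length k via a running sum: O(1) work per
    sample regardless of k. Borders are handled by replicating the edge
    samples (an assumed boundary policy)."""
    r = k // 2
    padded = [x[0]] * r + list(x) + [x[-1]] * r
    out = []
    s = sum(padded[:k])                          # initial window sum
    out.append(s / k)
    for i in range(1, len(x)):
        s += padded[i + k - 1] - padded[i - 1]   # slide the window
        out.append(s / k)
    return out
```

Each output sample costs one addition and one subtraction, whatever the kernel size, which is why the speedup is "more or less independent of the kernel size."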
These properties make them attractive when interactive performance is important, e.g., in interactive 3-D ultrasound acquisition and visualization. A number of groups have also applied anisotropic diffusion filtering to preprocess 3-D US data before volume rendering [82]–[86]. Currently, the computational cost of 3-D anisotropic filters is too high for use in fast implementations on 3-D/4-D systems, while their 2-D counterparts are already being used on some systems for speckle reduction in 2-D imaging. Since most 3-D US pipelines rely on the output of 2-D US pipelines to generate the individual slices comprising a 3-D volume, it should be expected that, on some systems, each slice can readily be preprocessed using such filters. As computational speed further improves, it may become possible to implement the 3-D versions if the improvement obtained from using 3-D filters instead of 2-D filters is significant. 4) Multivolume Visualization: Earlier attempts at visualizing blood flow using 3-D US focused on rendering power/velocity data together with B-mode data using grayscale volume rendering, after the B-mode and Doppler volumes are combined into one volume in which both Doppler and B-mode data are represented by grayscale values [87]–[89]. However, a clear distinction between tissue and blood does not exist in such an approach, except for the different shades of gray

used (e.g., brighter values for blood flow and darker values for B-mode). However, many researchers have shown that clear visualization of anatomical relationships along with blood flow provides valuable clinical information for better diagnosis. For example, Ohishi et al. [90] showed that visualizing the anatomical relationships between a tumor and the feeding arteries within or around the tumor is valuable for determining its malignancy in the liver and kidney. More recent studies also found that 3-D Doppler ultrasound can be used to differentiate malignant and benign tumors in the breast with its ability to quantify tumor blood flow and visualize vascular morphology [91]–[93]. Three-dimensional power/color Doppler data are typically displayed with 3-D B-mode data to provide their anatomical relationship, and such visualization is enabled by multivolume rendering techniques. Almost all commercially available 3-D US systems today support simultaneous visualization of blood flow and tissues, where grayscale is used to represent tissues and hue colors are used to represent vasculature (similar to the conventions used in 2-D color Doppler display). Various multivolume visualization techniques have been proposed for merging volumes from different modalities (e.g., CT and positron emission tomography) [94]–[96] and for in situ visualization of the merged data. In ultrasound, multivolume rendering is required to merge data from different modes, e.g., B-mode with color Doppler. Several approaches have been proposed specifically for ultrasound multivolume visualization [97]–[100]. In post fusion, the rendering pipeline is duplicated. Two volumes (B-mode and power) are rendered separately, and the resulting 2-D images are combined later at a postprocessing stage, where RGB values are assigned to the power/velocity data to differentiate them from the B-mode image [98].
While it is attractive for its simplicity because it does not require any modification to the rendering pipeline (except the duplication and postprocessing), post fusion suffers from depth ambiguity between tissues and vasculature since blending of the two volumes occurs after their independent renderings [100], [101]. This can be overcome by composite fusion [97], where tissue and flow data are colored separately and RGB rendering is performed together. In RGB rendering, each color channel is rendered independently of the others. Because tissue and flow data from different depths are composited together, color shifts may result, affecting each of the RGB channels differently. This may cause the colors to deviate from those originally intended to represent flow. Progressive fusion [100] can overcome this: the flow and tissue information are composited separately, but the opacities are updated jointly at each slice location to reflect the correct depth order. A final fusion stage combines the rendered results from tissue and flow. Unlike composite fusion, which requires deciding the relative weights of tissue and flow during compositing, progressive fusion allows separate control of the weights during the final blending stage, giving users the ability to easily emphasize flow versus tissue. Fig. 10 gives an example of these different approaches, where the data were acquired using a 3-D mechanical probe on a Hitachi Hi Vision 8500 system and then rendered offline on a PC using the various algorithms discussed.
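The joint opacity update at the heart of progressive fusion can be sketched for a single ray. The Python fragment below is an illustration, not the published algorithm's exact formulation: tissue and flow are reduced to scalar channels (real systems composite full color volumes), and the final blending weight is an arbitrary example value:

```python
def progressive_fusion_ray(tissue, flow, a_t, a_f):
    """Front-to-back compositing along one ray.
    tissue, flow: per-sample intensities in [0, 1] (flow is 0 where no
    Doppler signal exists); a_t, a_f: per-sample opacities.
    Tissue and flow are accumulated separately, but the remaining
    transparency is updated jointly at each sample, so the depth order
    of the two volumes stays correct."""
    c_tissue = c_flow = 0.0
    transp = 1.0                              # remaining ray transparency
    for t, f, at, af in zip(tissue, flow, a_t, a_f):
        c_tissue += transp * at * t
        c_flow += transp * af * f
        transp *= (1.0 - at) * (1.0 - af)     # joint opacity update
        if transp < 1e-4:                     # early ray termination
            break
    # Final fusion stage: user-controlled emphasis of flow vs. tissue.
    w_flow = 0.7                              # illustrative weight only
    return (1.0 - w_flow) * c_tissue + w_flow * c_flow
```

Because the two channels are blended only at the end, changing `w_flow` in this sketch redoes just the final blend, which mirrors how progressive fusion lets the user re-weight flow versus tissue without re-compositing either volume.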


Fig. 10. Illustration of different multivolume rendering techniques: (a) post fusion, (b) composite fusion, and (c) progressive fusion. Different opacity TFs for B-mode data are used for generating the upper (low opacity) and lower (high opacity) row of images.

In all these rendering approaches, opacity plays an important role in depicting the spatial relationship between vasculature and tissue structures [100]. Images obtained using low and high opacities for the B-mode data from the same volume are shown in the upper and lower rows of Fig. 10, respectively. From these images, it appears that more transparent opacities for B-mode and opaque opacities for color/power data work well to differentiate the relevant information from tissues and flow in visualizing the vasculature. Another approach that achieves this was proposed by Petersch et al. [99], called silhouette rendering (SR). In SR, both vasculature and tissues are rendered. However, vasculature is emphasized by selectively assigning less weight to tissues that lie in an orientation to occlude the clear visibility of vessels, resulting in a more glass-like rendering of tissues in those regions and solid renderings in other areas. Several obstacles remain to be overcome in 3-D blood flow visualization. For example, how to best quantify a Doppler volume and generate reliable indices from it is still under investigation [102]. How to represent the flow direction with color in 3-D is also a challenge that has not been addressed yet. In 2-D color Doppler, the convention of representing flow towards and away from the transducer using red and blue colors, respectively, no longer holds as the user rotates the volume. A clutter filter optimized to suppress clutter and color/power Doppler artifacts (e.g., flash artifact) in 2-D may no longer be optimal when applied to a 3-D volume. For example, while an occasionally generated flash artifact may not be detrimental in 2-D imaging, a similar artifact not suppressed properly in 3-D imaging could render an entire volume unusable [103].
Also, compared to 3-D B-mode imaging, 3-D Doppler imaging suffers more severely from low volume rates because of the need for ensemble data acquisition, which necessitates gated acquisition in more cases than in 3-D B-mode imaging.

C. Spatio-Temporal Image Correlation

A fetal heart beats at about twice the rate of the adult heart (i.e., 120–160 bpm), making it impossible to visualize its fine structures due to motion blur when a mechanical 3-D probe is used. While gated-acquisition techniques based on an electrocardiogram (ECG) signal have been used to minimize motion artifacts in acquiring 3-D volume datasets from adult hearts, fetal ECG is difficult to obtain because of the interference from the stronger maternal ECG, and the placement of ECG electrodes could restrict the movement of the US probe. Therefore, non-ECG-based gating is needed. STIC is an image-based gating method supported by several manufacturers to overcome the low-volume-rate limitation of mechanical 3-D probes and enable fetal echocardiography [104]–[106]. STIC achieves this by performing a very slow sweep (e.g., 0.003°/ms) over the duration of 15–25 heartbeats covering the fetal heart, while the 1-D array transducer inside the mechanical probe continuously acquires 2-D slices at a high frame rate (e.g., 100–150 frames/s). At the end of the sweep, this results in a long sequence of 2-D slices (e.g., 1500–2500 slices) densely sampling the beating fetal heart in both space and time. In a postprocessing step, each heartbeat is determined based on image-similarity metrics (e.g., autocorrelation [107] and cross-correlation [108]), and the slices are partitioned into distinct volumes representing different phases of the cardiac cycle based on their acquisition times relative to a detected heartbeat. Following the partitioning, a series of volumes representing different phases of the heart during one cardiac cycle are obtained and displayed. STIC may be performed in conjunction with B-mode or color/power Doppler acquisition [109], [110]. It can be used in the depiction of the main heart structures as well as the great vessels for the diagnosis of congenital heart disease.
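The partitioning step can be sketched as follows. This Python fragment assumes the beat-onset times have already been detected by the image-similarity step; the function name and all numbers are illustrative:

```python
import bisect

def partition_slices(slice_times, beat_onsets, n_phases):
    """Assign each 2-D slice to a cardiac-phase bin (0..n_phases-1).
    slice_times: acquisition time of each slice (s).
    beat_onsets: detected start time of each heartbeat (s), sorted.
    Slices falling outside the detected beat intervals are skipped."""
    phases = {}
    for idx, t in enumerate(slice_times):
        b = bisect.bisect_right(beat_onsets, t) - 1
        if b < 0 or b + 1 >= len(beat_onsets):
            continue                               # no enclosing beat
        period = beat_onsets[b + 1] - beat_onsets[b]
        frac = (t - beat_onsets[b]) / period       # phase within this beat
        phase = min(int(frac * n_phases), n_phases - 1)
        phases.setdefault(phase, []).append(idx)
    return phases
```

Computing the phase from the per-beat fraction, rather than from a fixed nominal period, is what gives STIC its tolerance to modest beat-to-beat variability; an arrhythmia that grossly changes the period breaks this assumption, as noted above.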
While fast acquisition and visualization using a 2-D matrix-array transducer is preferable to postprocessing, early studies reported that STIC image quality surpasses the image quality obtained from fetal hearts via matrix-array transducers [111]. This may be attributed to the better in-plane image quality of the mechanical 3-D probes. Also, STIC can deliver higher effective temporal resolution than 2-D arrays because the temporal resolution of a reconstructed volume is mainly determined by the 2-D acquisition rate of the mechanical probe's transducer array. For example, assuming a fetal heart rate of 150 bpm and 150 frames/s acquisition by the transducer, there are 60 distinct frames corresponding to each of the scan planes comprising a 3-D volume. This allows reconstruction of volumes for 60 different phases of the heart, leading to an effective temporal resolution of 60 volumes per cardiac cycle. In contrast, a 2-D array acquiring 25 volumes/s can deliver a temporal resolution of only ten volumes per cardiac cycle. Three-dimensional ultrasound could play an important role in detecting congenital anomalies of the fetal heart. While acquisition with 2-D arrays has been attempted, their use in fetal echocardiography is limited. Mechanical 3-D probes and STIC, on the other hand, are available on more US machines. Since a STIC acquisition is long (e.g., 10 to 15 s), any fetal movement or maternal motion during the acquisition could affect the quality of the data acquired. Also, while STIC can compensate for some variability in the heart rate (e.g., 10%), arrhythmias beyond this range are relatively common in fetuses [112]. In such cases, the STIC algorithm may not work correctly. Thus, in spite of its promise, STIC has its own limitations. Therefore, fully sampled 2-D arrays with high volume rates might be a better option in the near future as they become more widely available.

IV. COMPUTATIONAL APPROACHES

Because of the real-time nature of US systems and the large amount of computation and data movement that needs to be performed [113], hardwired designs based on application-specific integrated circuits (ASICs) were widely used in US systems. In a hardwired system, each US task (e.g., beamforming, B-mode, color Doppler, and scan conversion) is handled by one or more hardwired boards dedicated to that particular task. This has limited the flexibility to add new functionalities, introduce new applications, or modify the existing algorithms.
One of the important paradigm shifts in US system design that started in the mid-1990s and has been continuing since then is that increasingly more functionalities of an US machine are being implemented in software using programmable processors. The development of 3-D US processing in the last decade was accelerated by this paradigm shift. At first, programmable postprocessing boards were installed within hardwired US systems to facilitate prototyping and deployment of new algorithms and applications. For example, a programmable ultrasound image processor (PUIP) [64], which was based on two TMS320C80 MVP processors, was included with the Siemens Elegra machine and enabled the development and deployment of panoramic imaging and one of the first native 3-D US modes commercially available, 3-Scape. As more powerful processors came along, it became possible to support US back-end processing tasks [114], [115] as well as beamforming for low-end systems [116], [117] in software. This enabled quick development and integration of new algorithms and applications via software updates. If the new functionality to be added demanded more computational resources than were available on the system, additional boards had to be incorporated. This is the trend 3-D US has followed. At the beginning, reconstruction and visualization were performed on a separate computer outside the US system [118], but when 3-D US became a native mode inside the US machine, it

was supported by additional hardware boards (e.g., PUIP), separate from the main US engine. While 3-D processing and visualization were handled by the separate hardware, front-end beamforming and back-end signal and image processing were performed on the main US engine. With the increases in the computational power of the PCs embedded within US machines, some recent systems have also started to utilize the host PC to perform 3-D processing (e.g., volume reconstruction/3-D SC, preprocess filtering, and other tasks shown in Fig. 4), sometimes with the help of additional accelerator boards for rendering. The approaches used for interactive volume rendering vary but can generally be grouped into three main categories: approaches based on specialized volume-rendering hardware, software-based approaches, and texture-mapping hardware-based approaches. The custom-designed hardware approach specifically for volume rendering can typically deliver the highest performance. Much research effort went into developing such hardware in the 1990s, and many prototypes were built [119]. A volume-rendering accelerator board based on a single ASIC chip, VolumePro (TeraRecon, Inc., San Mateo, CA), was commercially developed to enable interactive visualization of large scientific and medical data on PCs. While many of the earlier attempts at designing volume-rendering hardware relied on direct ray casting accelerated through many parallel computing engines and special memory architectures, the VolumePro board (VolumePro 500) was based on the shear-warp algorithm [120]. It was eventually replaced with a second-generation architecture (VolumePro 1000) based on the shear-image-order algorithm [75] and a third-generation architecture (VolumePro 2000) based on direct ray casting. The custom-designed volume-rendering boards, even though they provide high performance and image quality, are costly and do not offer much flexibility in incorporating new visualization techniques.
Texture-mapping hardware-based solutions rely on the texture-mapping functionality of surface-rendering hardware, i.e., graphics processing units (GPUs). Use of GPUs is attractive because they are already available in many PCs, less costly than custom volume-rendering hardware, and able to handle many operations concurrently, such as bilinear or trilinear interpolations and shading calculations. They interface with fast memories and achieve high throughput via deep pipelines and many pixel shaders running in parallel on multiple independent pixels. While these architectures are optimized for surface rendering, volume rendering can be supported by defining a series of planes in the viewing direction parallel to the image projection plane, mapping them to a 3-D texture that represents voxel colors and opacities, and compositing them onto the frame buffer [121], [122]. Lacroute's design of the shear-warp algorithm was originally intended for software-based rendering with multiple processors [123]. The main advantage of a software-based approach is that it provides flexibility in adapting the rendering algorithm to the needs of the underlying application. Although software-based approaches initially could not deliver performance similar to that of hardware-based approaches, recent programmable media processors, such as the IBM Cell Broadband Engine (Cell BE), and general-purpose processors with multimedia instruction-set extensions, such as the Intel Core 2 family
with MMX/SSE instruction-set extensions, can deliver good rendering performance. For example, Cell BE is a multicore architecture with eight synergistic processing elements (SPEs) that are optimized for multimedia processing. A 256 × 256 × 256 8-bit volume can be rendered at 20 volumes/s using a software-based implementation of shear-warp volume rendering on only one of the eight SPEs [124], leaving the seven other SPEs to handle other tasks (e.g., filtering and back-end signal and image processing). An important and often very compute-intensive module in 3-D processing is volume reconstruction or 3-D scan conversion. Similar to scan conversion in conventional 2-D US imaging, 3-D SC converts scanline data from acquisition coordinates (e.g., polar) to a Cartesian volume to be used in rendering and display. This 3-D SC can be implemented either directly or by separating it into two 2-D SC passes [125]. While a direct implementation of 3-D SC avoids an extra stage of resampling, its computational cost is high because of the many square-root and arctangent operations needed in address computation. An LUT approach is sometimes used to speed up software-based implementations of 2-D SC by precomputing the input addresses and interpolation coefficients required for each output pixel. However, the required LUT size is prohibitively large in the case of 3-D SC. Separating the 3-D scan conversion into two 2-D SC passes, i.e., a series of 2-D SCs in the azimuth plane followed by a series of 2-D SCs in the elevation plane, overcomes this challenge because the 2-D SCs in each pass share the same address offsets and interpolation coefficients, and thus the same small 2-D LUTs can be used. A separable approach, however, could result in lowered image quality due to two stages of resampling. This loss in image quality can be alleviated considerably by carefully selecting the SC parameters in both passes [125].
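The LUT idea for a single 2-D SC pass can be sketched as follows. The sector geometry (apex at the origin, beams fanning symmetrically about the depth axis) and all names in this Python fragment are illustrative assumptions, not the parameters of any particular system:

```python
import math

def build_sc_lut(nx, ny, n_samples, n_lines, r_max, angle_span):
    """Precompute, for each output pixel, the fractional (range, line)
    coordinates in the polar acquisition grid. Bilinear neighbor
    addresses and weights follow directly from the integer and
    fractional parts. Pixels outside the sector stay None."""
    lut = [[None] * nx for _ in range(ny)]
    for j in range(ny):
        y = (j + 0.5) * r_max / ny                   # depth of pixel center
        for i in range(nx):
            x = (i + 0.5 - nx / 2) * r_max / ny      # lateral position
            r = math.hypot(x, y)                     # expensive sqrt ...
            th = math.atan2(x, y)                    # ... and arctangent
            if r > r_max or abs(th) > angle_span / 2:
                continue                             # outside the sector
            fr = r / r_max * (n_samples - 1)         # fractional sample index
            fl = (th / angle_span + 0.5) * (n_lines - 1)  # fractional line index
            lut[j][i] = (fr, fl)
    return lut
```

At run time, each output pixel needs only the integer and fractional parts of `(fr, fl)` to fetch its four polar-grid neighbors and bilinearly weight them, so the square-root and arctangent costs are paid once; in the separable scheme described above, the same small 2-D LUT is reused for every slice in a pass.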
If the 2-D scan converter from the 2-D US back-end is used, the 3-D processing engine only has to perform SC in the elevation direction.

V. DISCUSSION AND CONCLUSION

The introduction of and continued improvements in mechanical and 2-D matrix-array transducers in the last decade have played a key role in facilitating 3-D ultrasound and its clinical applications. While mechanical 3-D probes are relatively slower in acquiring volumes, they still provide better image quality (i.e., better in-plane resolution, fewer sidelobe artifacts, and higher sensitivity) and cost less than 2-D arrays. On the other hand, 2-D arrays are still relatively new and continue to evolve, and further improvements in image quality can be expected as the technology advances. The advantages of 2-D arrays are their high volume rates and equal focusing in both the azimuth and elevation directions. In the foreseeable future, it can be expected that both mechanical probes and 2-D arrays will be supported on more US systems. With the exception of cardiac applications, mechanical probes are likely to be used more than 2-D arrays initially, but this could change as and if the cost of 2-D arrays comes down. The main difficulty in manufacturing 2-D arrays comes from the need to interconnect and wire each element individually. The elements in 2-D arrays are significantly greater in number and smaller in size than those in 1-D arrays, which makes

interconnections even more challenging. Furthermore, to simplify 3-D beamforming, part of the beamforming circuitry is typically placed and integrated within the transducer, which further increases the complexity and cost. A different transducer construction technology, capacitive micromachined ultrasonic transducers (cMUTs) [126], has been proposed for manufacturing 2-D arrays because of several attractive features it promises [127]. Unlike the traditional lead-zirconate-titanate (PZT) arrays, which have to be individually wired, cMUTs can be produced using standard semiconductor manufacturing processes, improving the yield and making arrays with much smaller and more numerous elements feasible. The fact that they can be integrated with the electronics [128] makes them favorable for 2-D arrays, where integrated circuit boards are currently used on the transducer head to perform part of the beamforming and reduce wiring complexity. cMUT arrays also provide higher bandwidth than PZT arrays, which makes them particularly attractive for harmonic imaging and better axial resolution. However, cMUTs have lower sensitivity and penetration depth than PZT arrays, limitations that need to be overcome before their widespread adoption. Another transducer technology under research is piezoelectric micromachined ultrasonic transducers (pMUTs) [129], which are based on thin PZT films but micromachined similarly to cMUTs. These have so far been investigated by a limited number of groups and have yet to be extensively evaluated. Three-dimensional US systems are going to benefit from the availability of powerful multicore programmable processors. Some examples are IBM's Cell Broadband Engine, Intel's Core processor family, and AMD's Phenom II and Opteron families. They can deliver the high performance of a multiprocessor implementation without incurring the overhead of discrete processor chips having to communicate over external interfaces.
Since multiple cores on a chip typically share the same resources, e.g., the memory bus and on-chip memories/caches, each core can access the same data without having to duplicate or forward it between separate processors, which further improves implementation efficiency. Current-generation multicore chips typically have up to eight cores. However, more cores, e.g., 32, are expected in the future (Intel's Larrabee processor, for example, is expected to have 16 cores). Because 3-D visualization is currently implemented on a 3-D processing engine separate from the main US processing engine, US back-end signal/image processing and 3-D visualization tasks have distinct boundaries. However, as more tasks are implemented on the same computationally powerful new-generation processors, more flexibility could be allowed, and the boundary between 3-D processing and conventional back-end processing would become blurry. For example, 3-D processing could access ultrasound data at much earlier stages without having to rely on images optimized for 2-D display. This could change the way 2-D US signal processing is performed when the acquired data are intended for 3-D imaging. For example, separate filtering stages in 2-D and 3-D processing could be combined into one filtering stage optimized specifically for 3-D visualization. Similarly, beamforming and back-end signal/image processing operations could be performed only for those voxels that are to be rendered
in the final image [130]. This could lead to substantial savings in computation because it avoids redundant processing of data that will not be displayed to the user.

In this paper, we have summarized the main technological developments of the last decade that have had a major impact on current 3-D US systems. Compared to the status in the 1980s and 1990s, today's 3-D US systems support much easier acquisition and deliver higher image quality. However, several issues still remain. The image quality of 3-D US imaging is still inferior to that of conventional 2-D imaging. While 3-D US can offer unique views of the underlying anatomy, it is not clear how diagnostically relevant views can be quickly extracted from an acquired volume for the clinician to evaluate in a timely fashion. Because of the amount of data acquired and presented to the user at the same time, it can become difficult to manually optimize the viewing parameters and also to perform measurements quickly. Therefore, more automation in these tasks is desirable and needs to be developed in the future. Siemens' Syngo software, which automatically extracts fetal features from an acquired volume, is one example of how automation could help clinicians increase throughput or improve their workflow [131]. While display technologies to enhance 3-D perception, such as stereoscopic displays, were considered in the development of 3-D US, they have not gained much momentum due to the difficulties involved with supporting and using such displays in clinical settings. In recent years, stereoscopic display technologies, such as wearable viewers and specialized screens, have advanced considerably and are being increasingly adopted for video gaming and augmented reality applications. It remains to be seen whether 3-D US will benefit from their widespread availability.
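The pixel-oriented idea of [130], doing work only for samples that actually reach the display, already has a well-known counterpart inside volume renderers: with front-to-back alpha compositing, a ray can terminate early once its accumulated opacity approaches 1, so the voxels behind it are never fetched or shaded. A minimal single-ray sketch (our own illustration; the 0.95 threshold is an arbitrary choice, not from the paper):

```python
def composite_ray(samples, opacity, threshold=0.95):
    """Front-to-back compositing with early ray termination.

    samples: voxel intensities along one ray, front sample first
    opacity: per-sample alpha values in [0, 1]
    Returns (composited color, accumulated alpha, samples actually processed).
    """
    color, alpha, processed = 0.0, 0.0, 0
    for s, a in zip(samples, opacity):
        color += (1.0 - alpha) * a * s   # contribution attenuated by what lies in front
        alpha += (1.0 - alpha) * a
        processed += 1
        if alpha >= threshold:           # ray is nearly opaque: stop early
            break
    return color, alpha, processed
```

For a ray through 100 samples of 0.6 opacity each, the loop stops after only four samples, so the remaining 96 voxels are never processed; voxel-selective beamforming generalizes this kind of saving to the whole processing pipeline.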
In the foreseeable future, we anticipate 3-D US being used in conjunction with 2-D imaging, i.e., as a clinical tool that augments conventional 2-D US imaging in the further confirmation of certain diagnoses or for workflow improvements. Wider clinical acceptance and standalone use of 3-D US will certainly require more clinical studies showing the efficacy of this modality against 2-D routines. The current high cost of 3-D systems likely hinders wider availability of 3-D US. If clinical utility is substantiated, we could expect more machines with 3-D options, and cost could become a secondary issue. Nevertheless, reduced costs, along with continued improvements in transducer manufacturing technology, computer performance, and ease of use, will certainly facilitate further adoption.

REFERENCES
[1] H. F. Stegall, R. F. Rushmer, and D. W. Baker, "A transcutaneous ultrasonic blood-velocity meter," J. Appl. Physiol., vol. 21, pp. 707–711, 1966.
[2] D. W. Baker, "Pulsed ultrasonic Doppler blood-flow sensing," IEEE Trans. Sonics Ultrason., vol. SU-17, pp. 170–184, 1970.
[3] R. C. Eggleton and K. W. Johnston, "Real time mechanical scanning system compared with array techniques," in Proc. IEEE Ultrason. Symp., 1974, pp. 16–18.
[4] C. Kasai, K. Namekawa, A. Koyano, and R. Omoto, "Real-time two-dimensional blood flow imaging using an autocorrelation technique," IEEE Trans. Sonics Ultrason., vol. SU-32, pp. 458–464, 1985.
[5] K. E. Thomenius, "Evolution of ultrasound beamformers," in Proc. IEEE Ultrason. Symp., 1996, vol. 2, pp. 1615–1622.
[6] D. H. Howry, G. Posakony, C. R. Cushman, and J. H. Holmes, "Three-dimensional and stereoscopic observation of body structures by ultrasound," J. Appl. Physiol., vol. 9, pp. 304–306, 1956.
[7] J. F. Brinkley, W. E. Moritz, and D. W. Baker, "Ultrasonic three-dimensional imaging and volume from a series of arbitrary sector scans," Ultrasound Med. Biol., vol. 4, pp. 317–327, 1978.
[8] F. H. Raab, E. B. Blood, T. O. Steiner, and H. R. Jones, "Magnetic position and orientation tracking system," IEEE Trans. Aerosp. Electron. Syst., vol. AES-15, pp. 709–718, 1979.
[9] P. H. Mills and H. Fuchs, "3-D ultrasound display using optical tracking," in Proc. 1st Conf. Vis. Biomed. Comput., 1990, pp. 490–497.
[10] P. E. Nikravesh, D. J. Skorton, K. B. Chandran, Y. M. Attarwala, N. Pandian, and R. E. Kerber, "Computerized three-dimensional finite element reconstruction of the left ventricle from cross-sectional echocardiograms," Ultrason. Imag., vol. 6, pp. 48–59, 1984.
[11] J.-F. Chen, J. B. Fowlkes, P. L. Carson, and J. M. Rubin, "Determination of scan-plane motion using speckle decorrelation: Theoretical considerations and initial test," Int. J. Imag. Syst. Technol., vol. 8, pp. 38–44, 1997.
[12] H. Brandl, A. Gritzky, and M. Haizinger, "3D ultrasound: A dedicated system," Eur. Radiol., vol. 9, pp. S331–S333, 1999.
[13] S. W. Smith, H. G. Pavy Jr., and O. T. von Ramm, "High-speed ultrasound volumetric imaging system. I. Transducer design and beam steering," IEEE Trans. Ultrason., Ferroelectr., Freq. Control, vol. 38, pp. 100–108, 1991.
[14] L. D. Platt, "Three-dimensional ultrasound, 2000," Ultrasound Obstetr. Gynecol., vol. 16, pp. 295–298, 2000.
[15] J. M. Faure, G. Captier, M. Baumler, and P. Boulot, "Sonographic assessment of normal fetal palate using three-dimensional imaging: A new technique," Ultrasound Obstetr. Gynecol., vol. 29, pp. 159–165, 2007.
[16] G. Pilu and M. Segata, "A novel technique for visualization of the normal and cleft fetal secondary palate: Angled insonation and three-dimensional ultrasound," Ultrasound Obstetr. Gynecol., vol. 29, pp. 166–169, 2007.
[17] T. Esser, P. Rogalla, N. Sarioglu, and K. D. Kalache, "Three-dimensional ultrasonographic demonstration of agenesis of the 12th rib in a fetus with trisomy 21," Ultrasound Obstetr. Gynecol., vol. 27, pp. 714–715, 2006.
[18] R. Hershkovitz, "Prenatal diagnosis of isolated abnormal number of ribs," Ultrasound Obstetr. Gynecol., vol. 32, pp. 506–509, 2008.
[19] D. Jurkovic, A. Geipel, K. Gruboeck, E. Jauniaux, M. Natucci, and S. Campbell, "Three-dimensional ultrasound for the assessment of uterine anatomy and detection of congenital anomalies: A comparison with hysterosalpingography and two-dimensional sonography," Ultrasound Obstetr. Gynecol., vol. 5, pp. 233–237, 1995.
[20] F. Raga, F. Bonilla-Musoles, J. Blanes, and N. G. Osborne, "Congenital Mullerian anomalies: Diagnostic accuracy of three-dimensional ultrasound," Fertil. Steril., vol. 65, pp. 523–528, 1996.
[21] T. Ghi, P. Casadio, M. Kuleva, A. M. Perrone, L. Savelli, S. Giunchi, M. C. Meriggiola, G. Gubbini, G. Pilu, C. Pelusi, and G. Pelusi, "Accuracy of three-dimensional ultrasound in diagnosis and classification of congenital uterine anomalies," Fertil. Steril., vol. 92, pp. 808–813, 2008.
[22] L. Sugeng, P. Coon, L. Weinert, N. Jolly, G. Lammertin, J. E. Bednarz, K. Thiele, and R. M. Lang, "Use of real-time 3-dimensional transthoracic echocardiography in the evaluation of mitral valve disease," J. Amer. Soc. Echocardiogr., vol. 19, pp. 413–421, 2006.
[23] M. Takeuchi and R. M. Lang, "Three-dimensional stress testing: Volumetric acquisitions," Cardiol. Clin., vol. 25, pp. 267–272, 2007.
[24] R. Chaoui and K. S. Heling, "New developments in fetal heart scanning: Three- and four-dimensional fetal echocardiography," Semin. Fetal Neonatal Med., vol. 10, pp. 567–577, 2005.
[25] L. F. Goncalves, W. Lee, J. Espinoza, and R. Romero, "Three- and 4-dimensional ultrasound in obstetric practice: Does it help?," J. Ultrasound Med., vol. 24, pp. 1599–1624, 2005.
[26] G. Valocik, O. Kamp, and C. A. Visser, "Three-dimensional echocardiography in mitral valve disease," Eur. J. Echocardiogr., vol. 6, pp. 443–454, 2005.
[27] M. X. Xie, X. F. Wang, T. O. Cheng, Q. Lu, L. Yuan, and X. Liu, "Real-time 3-dimensional echocardiography: A review of the development of the technology and its clinical application," Prog. Cardiovasc. Dis., vol. 48, pp. 209–225, 2005.
[28] L. F. Goncalves, W. Lee, J. Espinoza, and R. Romero, "Examination of the fetal heart by four-dimensional (4-D) ultrasound with spatio-temporal image correlation (STIC)," Ultrasound Obstetr. Gynecol., vol. 27, pp. 336–348, 2006.
[29] R. C. Houck, J. E. Cooke, and E. A. Gill, "Live 3D echocardiography: A replacement for traditional 2D echocardiography?," Amer. J. Roentgenol., vol. 187, pp. 1092–1106, 2006.
[30] R. M. Lang, V. Mor-Avi, L. Sugeng, P. S. Nieman, and D. J. Sahn, "Three-dimensional echocardiography: The benefits of the additional dimension," J. Amer. Coll. Cardiol., vol. 48, pp. 2053–2069, 2006.
[31] G. R. Marx and X. Su, "Three-dimensional echocardiography in congenital heart disease," Cardiol. Clin., vol. 25, pp. 357–365, 2007.
[32] V. Mor-Avi and R. M. Lang, "Three-dimensional echocardiographic evaluation of myocardial perfusion," Cardiol. Clin., vol. 25, pp. 273–282, 2007.
[33] B. Tutschek and D. J. Sahn, "Three-dimensional echocardiography for studies of the fetal heart: Present status and future perspectives," Cardiol. Clin., vol. 25, pp. 341–355, 2007.
[34] D. V. Valsky and S. Yagel, "Three-dimensional transperineal ultrasonography of the pelvic floor: Improving visualization for new clinical applications and better functional assessment," J. Ultrasound Med., vol. 26, pp. 1373–1387, 2007.
[35] L. Coyne, K. Jayaprakasan, and N. Raine-Fenning, "3D ultrasound in gynecology and reproductive medicine," in Women's Health, London, U.K., 2008, vol. 4, pp. 501–516.
[36] H. A. G. Filho, L. L. da Costa, E. Araujo, Jr., L. M. Nardozza, P. M. Nowak, A. F. Moron, R. Mattar, and C. R. Pires, "Placenta: Angiogenesis and vascular assessment through three-dimensional power Doppler ultrasonography," Arch. Gynecol. Obstetr., vol. 277, pp. 195–200, 2008.
[37] V. Mor-Avi, L. Sugeng, and R. M. Lang, "Three-dimensional adult echocardiography: Where the hidden dimension helps," Curr. Cardiol. Rep., vol. 10, pp. 218–225, 2008.
[38] V. Mor-Avi, L. Sugeng, and R. M. Lang, "Real-time 3-dimensional echocardiography: An integral component of the routine echocardiographic examination in adult patients?," Circulation, vol. 119, pp. 314–329, 2009.
[39] J. Solis, M. Sitges, R. A. Levine, and J. Hung, "Three-dimensional echocardiography. New possibilities in mitral valve assessment," Rev. Esp. Cardiol., vol. 62, pp. 188–198, 2009.
[40] A. Fenster and D. B. Downey, "3-D ultrasound imaging: A review," IEEE Eng. Med. Biol. Mag., vol. 15, pp. 41–51, 1996.
[41] T. R. Nelson and D. H. Pretorius, "Three-dimensional ultrasound imaging," Ultrasound Med. Biol., vol. 24, pp. 1243–1270, 1998.
[42] G. York and Y. Kim, "Ultrasound processing and computing: Review and future directions," Annu. Rev. Biomed. Eng., vol. 1, pp. 559–588, 1999.
[43] T. R. Nelson, D. H. Pretorius, A. Hull, M. Riccabona, M. S. Sklansky, and G. James, "Sources and impact of artifacts on clinical three-dimensional ultrasound imaging," Ultrasound Obstetr. Gynecol., vol. 16, pp. 374–383, 2000.
[44] A. Fenster, D. B. Downey, and H. N. Cardinal, "Three-dimensional ultrasound imaging," Phys. Med. Biol., vol. 46, pp. R67–99, 2001.
[45] E. Merz, F. Bahlmann, and G. Weber, "Volume scanning in the evaluation of fetal malformations: A new dimension in prenatal diagnosis," Ultrasound Obstetr. Gynecol., vol. 5, pp. 222–227, 1995.
[46] O. T. von Ramm, S. W. Smith, and H. G. Pavy Jr., "High-speed ultrasound volumetric imaging system. II. Parallel processing and image display," IEEE Trans. Ultrason., Ferroelectr., Freq. Control, vol. 38, pp. 109–115, 1991.
[47] T. Bjastad, S. A. Aase, and H. Torp, "The impact of aberration on high frame rate cardiac B-mode imaging," IEEE Trans. Ultrason., Ferroelectr., Freq. Control, vol. 54, pp. 32–41, 2007.
[48] S. W. Smith, W. Lee, E. D. Light, J. T. Yen, P. Wolf, and S. Idriss, "Two dimensional arrays for 3-D ultrasound imaging," in Proc. IEEE Ultrason. Symp., 2002, vol. 2, pp. 1545–1553.
[49] Y. Hosono and Y. Yamashita, "Piezoelectric ceramics with high dielectric constants for ultrasonic medical transducers," IEEE Trans. Ultrason., Ferroelectr., Freq. Control, vol. 52, pp. 1823–1828, 2005.
[50] T. L. Szabo and P. A. Lewin, "Piezoelectric materials for imaging," J. Ultrasound Med., vol. 26, pp. 283–288, 2007.
[51] D. H. Turnbull and F. S. Foster, "Beam steering with pulsed two-dimensional transducer arrays," IEEE Trans. Ultrason., Ferroelectr., Freq. Control, vol. 38, pp. 320–333, 1991.
[52] B. Savord and R. Solomon, "Fully sampled matrix transducer for real time 3D ultrasonic imaging," in Proc. IEEE Ultrason. Symp., 2003, vol. 1, pp. 945–953.
[53] N. Pagoulatos, W. S. Edwards, D. R. Haynor, and Y. Kim, "Interactive 3D registration of ultrasound and magnetic resonance images based on a magnetic position sensor," IEEE Trans. Inf. Technol. Biomed., vol. 3, pp. 278–288, 1999.
[54] J. H. Kaspersen, E. Sjølie, J. Wesche, J. Åsland, J. Lundbom, A. Ødegård, F. Lindseth, and T. A. N. Hernes, "Three-dimensional ultrasound-based navigation combined with preoperative CT during abdominal interventions: A feasibility study," Cardiovasc. Intervent. Radiol., vol. 26, pp. 347–356, 2003.
[55] H. Kawasoe, Y. Eguchi, T. Mizuta, T. Yasutake, I. Ozaki, T. Shimonishi, K. Miyazaki, T. Tamai, A. Kato, S. Kudo, and K. Fujimoto, "Radiofrequency ablation with the real-time virtual sonography system for treating hepatocellular carcinoma difficult to detect by ultrasonography," J. Clin. Biochem. Nutr., vol. 40, pp. 66–72, 2007.
[56] L. Mercier, T. Lango, F. Lindseth, and D. L. Collins, "A review of calibration techniques for freehand 3-D ultrasound systems," Ultrasound Med. Biol., vol. 31, pp. 449–471, 2005.
[57] G. M. Treece, R. W. Prager, A. H. Gee, and L. Berman, "Correction of probe pressure artifacts in freehand 3D ultrasound," Med. Image Anal., vol. 6, pp. 199–214, 2002.
[58] G. M. Treece, A. H. Gee, R. W. Prager, C. J. Cash, and L. H. Berman, "High-definition freehand 3-D ultrasound," Ultrasound Med. Biol., vol. 29, pp. 529–546, 2003.
[59] O. V. Solberg, F. Lindseth, H. Torp, R. E. Blake, and T. A. N. Hernes, "Freehand 3D ultrasound reconstruction algorithms: A review," Ultrasound Med. Biol., vol. 33, pp. 991–1009, 2007.
[60] A. A. A. Rahni, I. Yahya, and S. M. Mustaza, "2D translation from a 6-DOF MEMS IMU's orientation for freehand 3D ultrasound scanning," in Proc. 4th Kuala Lumpur Int. Conf. Biomed. Eng., 2008, pp. 699–702.
[61] R. J. Housden, A. H. Gee, R. W. Prager, and G. M. Treece, "Rotational motion in sensorless freehand three-dimensional ultrasound," Ultrasonics, vol. 48, pp. 412–422, 2008.
[62] G. R. DeVore and B. Polanko, "Tomographic ultrasound imaging of the fetal heart: A new technique for identifying normal and abnormal cardiac anatomy," J. Ultrasound Med., vol. 24, pp. 1685–1696, 2005.
[63] J. Shang, R. Managuli, and Y. Kim, "Efficient arbitrary volume reslicing for pre-scan-converted volume in an ultrasound backend," in Proc. IEEE Ultrason. Symp., 2009.
[64] Y. Kim, J. H. Kim, C. Basoglu, and T. C. Winter, "Programmable ultrasound imaging using multimedia technologies: A next-generation ultrasound machine," IEEE Trans. Inf. Technol. Biomed., vol. 1, pp. 19–29, 1997.
[65] N. Max, "Optical models for direct volume rendering," IEEE Trans. Vis. Comput. Graphics, vol. 1, pp. 99–108, 1995.
[66] M. Levoy, "Display of surfaces from volume data," IEEE Comput. Graph. Appl., vol. 8, pp. 29–37, 1988.
[67] H. Pfister, B. Lorensen, C. Bajaj, G. Kindlmann, W. Schroeder, L. S. Avila, K. M. Raghu, R. Machiraju, and L. Jinho, "The transfer function bake-off," IEEE Comput. Graph. Appl., vol. 21, pp. 16–22, 2001.
[68] D. Honigmann, J. Ruisz, and C. Haider, "Adaptive design of a global opacity transfer function for direct volume rendering of ultrasound data," in Proc. IEEE Vis. (VIS'03), 2003, pp. 489–496.
[69] B. Petersch, M. Hadwiger, H. Hauser, and D. Honigmann, "Real time computation and temporal coherence of opacity transfer functions for direct volume rendering of ultrasound data," Comput. Med. Imag. Graph., vol. 29, pp. 53–63, 2005.
[70] B. T. Phong, "Illumination for computer generated pictures," Commun. ACM, vol. 18, pp. 311–317, 1975.
[71] P. Lacroute and M. Levoy, "Fast volume rendering using a shear-warp factorization of the viewing transformation," in Proc. 21st Annu. Conf. Comput. Graph. Interact. Tech., 1994, pp. 451–458.
[72] J. Sweeney and K. Mueller, "Shear-warp deluxe: The shear-warp algorithm revisited," in Proc. Symp. Data Vis., 2002, pp. 95–104.
[73] K. Engel, M. Kraus, and T. Ertl, "High-quality pre-integrated volume rendering using hardware-accelerated pixel shading," in Proc. ACM SIGGRAPH/EUROGRAPHICS Workshop Graph. Hardware, 2001, pp. 9–16.
[74] J. P. Schulze, M. Kraus, U. Lang, and T. Ertl, "Integrating pre-integration into the shear-warp algorithm," in Proc. 2003 EUROGRAPHICS/IEEE TVCG Workshop Vol. Graph., 2003, pp. 109–118.
[75] Y. Wu, V. Bhatia, H. Lauer, and L. Seiler, "Shear-image order ray casting volume rendering," in Proc. 2003 Symp. Interact. 3D Graph., 2003, pp. 152–162.
[76] R. Managuli, E.-H. Kim, K. Karadayi, and Y. Kim, "Advanced volume rendering algorithm for real-time 3D ultrasound: Integrating pre-integration into shear-image-order algorithm," Proc. SPIE Med. Imag., vol. 6147, p. 614702, 2006.
[77] G. Sakas, L.-A. Schreyer, and M. Grimm, "Preprocessing and volume rendering of 3D ultrasonic data," IEEE Comput. Graph. Appl., vol. 15, pp. 47–54, 1995.
[78] C. Kim and H. Park, "Preprocessing and efficient volume rendering of 3-D ultrasound image," IEICE Trans. Inf. Syst., vol. E83-D, pp. 259–264, 2000.
[79] A. N. Evans and M. S. Nixon, "Mode filtering to reduce ultrasound speckle for feature extraction," Proc. Inst. Elect. Eng. Vision, Image Signal Process., vol. 142, pp. 87–94, 1995.
[80] V. Shamdasani, U. Bae, R. Managuli, and Y. Kim, "Improving the visualization of 3D ultrasound data with 3D filtering," Proc. SPIE Med. Imag., vol. 5744, pp. 455–461, 2005.
[81] U. Bae, V. Shamdasani, R. Managuli, and Y. Kim, "Fast adaptive unsharp masking with programmable mediaprocessors," J. Dig. Imag., vol. 16, pp. 230–239, 2003.
[82] Q. Sun, J. A. Hossack, J. Tang, and S. T. Acton, "Speckle reducing anisotropic diffusion for 3D ultrasound images," Comput. Med. Imag. Graph., vol. 28, pp. 461–470, 2004.
[83] C. R. Castro-Pareja, O. S. Dandekar, and R. Shekhar, "FPGA-based real-time anisotropic diffusion filtering of 3D ultrasound images," Proc. Real-Time Imag. IX, vol. 5671, pp. 123–131, 2005.
[84] M.-J. Kim, H.-J. Yun, and M.-H. Kim, "Faster, more accurate diffusion filtering for fetal ultrasound volumes," Image Anal. Recognit., vol. 4142, pp. 524–534, 2006.
[85] L. Wang, D. Li, T. Wang, J. Lin, Y. Peng, L. Rao, and Y. Zheng, "Filtering of medical ultrasonic images based on a modified anisotropic diffusion equation," J. Electron. (China), vol. 24, pp. 209–213, 2007.
[86] Q. Huang, Y. Zheng, M. Lu, T. Wang, and S. Chen, "A new adaptive interpolation algorithm for 3D ultrasound imaging with speckle reduction and edge preservation," Comput. Med. Imag. Graph., vol. 33, pp. 100–110, 2009.
[87] P. A. Picot, D. W. Rickey, R. Mitchell, R. N. Rankin, and A. Fenster, "Three-dimensional colour Doppler imaging," Ultrasound Med. Biol., vol. 19, pp. 95–104, 1993.
[88] D. B. Downey and A. Fenster, "Vascular imaging with a three-dimensional power Doppler system," Amer. J. Roentgenol., vol. 165, pp. 665–668, 1995.
[89] C. J. Ritchie, W. S. Edwards, L. A. Mack, D. R. Cyr, and Y. Kim, "Three-dimensional ultrasonic angiography using power-mode Doppler," Ultrasound Med. Biol., vol. 22, pp. 277–286, 1996.
[90] H. Ohishi, T. Hirai, R. Yamada, S. Hirohashi, H. Uchida, H. Hashimoto, T. Jibiki, and Y. Takeuchi, "Three-dimensional power Doppler sonography of tumor vascularity," J. Ultrasound Med., vol. 17, pp. 619–622, 1998.
[91] A. Ozdemir, H. Ozdemir, I. Maral, O. Konus, S. Yucel, and S. Isik, "Differential diagnosis of solid breast lesions: Contribution of Doppler studies to mammography and gray scale imaging," J. Ultrasound Med., vol. 20, pp. 1091–1101, 2001.
[92] R. F. Chang, S. F. Huang, W. K. Moon, Y. H. Lee, and D. R. Chen, "Computer algorithm for analysing breast tumor angiogenesis using 3-D power Doppler ultrasound," Ultrasound Med. Biol., vol. 32, pp. 1499–1508, 2006.
[93] G. L. LeCarpentier, M. A. Roubidoux, J. B. Fowlkes, J. F. Krucker, K. A. Hunt, C. Paramagul, T. D. Johnson, N. J. Thorson, K. D. Engle, and P. L. Carson, "Suspicious breast lesions: Assessment of 3D Doppler US indexes for classification in a test population and fourfold cross-validation scheme," Radiology, vol. 249, pp. 463–470, 2008.
[94] C. Barillot, D. Lemoine, L. Le Briquer, F. Lachmann, and B. Gibaud, "Data fusion in medical imaging: Merging multimodal and multipatient images, identification of structures and 3D display aspects," Eur. J. Radiol., vol. 17, pp. 22–27, 1993.
[95] W. Cai and G. Sakas, "Data intermixing and multi-volume rendering," Comput. Graph. Forum, vol. 18, pp. 359–368, 1999.
[96] M. Ferre, A. Puig, and D. Tost, "A framework for fusion methods and rendering techniques of multimodal volume data," Comput. Animat. Virtual Worlds, vol. 15, pp. 63–77, 2004.
[97] G. A. Schwartz, "Three-dimensional medical ultrasonic diagnostic image of tissue texture and vasculature," U.S. Patent 5 720 291, Feb. 24, 1998.
[98] C. Deforge, D.-C. Liu, S. P. Czenszak, C. Robinson, and P. Sutcliffe, "Three-dimensional tissue/flow ultrasound imaging system," U.S. Patent 6 280 387, Aug. 28, 2001.
[99] B. Petersch and D. Honigmann, "Blood flow in its context: Combining 3D B-mode and color Doppler ultrasonic data," IEEE Trans. Vis. Comput. Graph., vol. 13, pp. 748–757, 2007.
[100] Y. M. Yoo, R. Managuli, and Y. Kim, "New multi-volume rendering technique for three-dimensional power Doppler imaging," Ultrasonics, vol. 46, pp. 313–322, 2007.
[101] R. Managuli, Y. M. Yoo, and Y. Kim, "Multi-volume rendering for three-dimensional power Doppler imaging," in Proc. IEEE Ultrason. Symp., 2005, vol. 4, pp. 2046–2049.
[102] N. J. Raine-Fenning, K. V. Ramnarine, N. M. Nordin, and B. K. Campbell, "Quantification of blood perfusion using 3D power Doppler: An in-vitro flow phantom study," in J. Phys. Conf. Series, 2004, vol. 1, pp. 181–186.
[103] Y. M. Yoo, S. Sikdar, K. Karadayi, O. Kolokythas, and Y. Kim, "Adaptive clutter rejection for 3D color Doppler imaging: Preliminary clinical study," Ultrasound Med. Biol., vol. 34, pp. 1221–1231, 2008.
[104] G. R. DeVore, P. Falkensammer, M. S. Sklansky, and L. D. Platt, "Spatio-temporal image correlation (STIC): New technology for evaluation of the fetal heart," Ultrasound Obstetr. Gynecol., vol. 22, pp. 380–387, 2003.
[105] L. F. Goncalves, W. Lee, T. Chaiworapongsa, J. Espinoza, M. L. Schoen, P. Falkensammer, M. Treadwell, and R. Romero, "Four-dimensional ultrasonography of the fetal heart with spatiotemporal image correlation," Amer. J. Obstetr. Gynecol., vol. 189, pp. 1792–1802, 2003.
[106] F. Vinals, P. Poblete, and A. Giuliano, "Spatio-temporal image correlation (STIC): A new tool for the prenatal screening of congenital heart defects," Ultrasound Obstetr. Gynecol., vol. 22, pp. 388–394, 2003.
[107] A. Schoisswohl and P. Falkensammer, "Method and apparatus for obtaining a volumetric scan of a periodically moving object," U.S. Patent 6 966 878 B2, Nov. 22, 2005.
[108] A. Schoisswohl, "Method and apparatus for correcting a volumetric scan of an object moving at an uneven period," U.S. Patent 6 980 844 B2, Dec. 27, 2005.
[109] R. Chaoui, J. Hoffmann, and K. S. Heling, "Three-dimensional (3D) and 4D color Doppler fetal echocardiography using spatio-temporal image correlation (STIC)," Ultrasound Obstetr. Gynecol., vol. 23, pp. 535–545, 2004.
[110] S. Yagel, D. V. Valsky, and B. Messing, "Detailed assessment of fetal ventricular septal defect with 4D color Doppler ultrasound using spatio-temporal image correlation technology," Ultrasound Obstetr. Gynecol., vol. 25, pp. 97–98, 2005.
[111] L. F. Goncalves, J. Espinoza, J. P. Kusanovic, W. Lee, J. K. Nien, J. Santolaya-Forgas, G. Mari, M. C. Treadwell, and R. Romero, "Applications of 2-dimensional matrix array for 3- and 4-dimensional examination of the fetus: A pictorial essay," J. Ultrasound Med., vol. 25, pp. 745–755, 2006.
[112] J. A. Copel, R.-I. Liang, K. Demasio, S. Ozeren, and C. S. Kleinman, "The clinical significance of the irregular fetal heart rhythm," Amer. J. Obstet. Gynecol., vol. 182, pp. 813–819, 2000.
[113] C. Basoglu, R. Managuli, G. York, and Y. Kim, "Computing requirements of modern medical diagnostic ultrasound machines," Parallel Comput., vol. 24, pp. 1407–1431, 1998.
[114] S. Sikdar, R. Managuli, L. Gong, V. Shamdasani, T. Mitake, T. Hayashi, and Y. Kim, "A single mediaprocessor-based programmable ultrasound system," IEEE Trans. Inf. Technol. Biomed., vol. 7, pp. 64–70, 2003.
[115] V. Shamdasani, R. Managuli, S. Sikdar, and Y. Kim, "Ultrasound color-flow imaging on a programmable system," IEEE Trans. Inf. Technol. Biomed., vol. 8, pp. 191–199, 2004.
[116] T. Fukuoka, F. K. Schneider, Y. M. Yoo, A. Agarwal, and Y. Kim, "Ultrasound color Doppler imaging on a fully programmable architecture," in Proc. IEEE Ultrason. Symp., 2006, pp. 1639–1642.
[117] H.-Y. Sohn, S.-H. Seo, J. Kim, and T.-K. Song, "Software implementation of ultrasound beamforming using ADSP-TS201 DSPs," Proc. SPIE Med. Imag., vol. 6920, p. 69200Z, 2008.
[118] W. S. Edwards, C. Deforge, and Y. Kim, "Interactive three-dimensional ultrasound using a programmable multimedia processor," Int. J. Imag. Syst. Technol., vol. 9, pp. 442–454, 1998.
[119] H. Pfister, "Architectures for real-time volume rendering," Future Gen. Comput. Syst., vol. 15, pp. 1–9, 1999.
[120] H. Pfister, J. Hardenbergh, J. Knittel, H. Lauer, and L. Seiler, "The VolumePro real-time ray-casting system," in Proc. 26th Annu. Conf. Comput. Graph. Interact. Tech., 1999, pp. 251–260.
[121] B. Cabral, N. Cam, and J. Foran, "Accelerated volume rendering and tomographic reconstruction using texture mapping hardware," in Proc. 1994 Symp. Vol. Vis., 1994, pp. 91–98.
[122] A. Van Gelder and K. Kim, "Direct volume rendering with shading via three-dimensional textures," in Proc. 1996 Symp. Vol. Vis., 1996, pp. 23–30.
[123] P. Lacroute, "Real-time volume rendering on shared memory multiprocessors using the shear-warp factorization," in Proc. IEEE Symp. Parallel Render., 1995, pp. 15–22.
[124] C. Xu, R. Managuli, and Y. Kim, "Shear-warp volume rendering on Cell," in Proc. Workshop Solv. Computat. Challenges Med. Imag., Seattle, WA, 2007.
[125] B. Zhuang, V. Shamdasani, S. Sikdar, R. Managuli, and Y. Kim, "Real-time 3D ultrasound scan conversion using a multi-core processor," IEEE Trans. Inf. Technol. Biomed., vol. 13, pp. 571–574, 2009.
[126] O. Oralkan, A. S. Ergun, J. A. Johnson, M. Karaman, U. Demirci, K. Kaviani, T. H. Lee, and B. T. Khuri-Yakub, "Capacitive micromachined ultrasonic transducers: Next-generation arrays for acoustic imaging?," IEEE Trans. Ultrason., Ferroelectr., Freq. Control, vol. 49, pp. 1596–1610, 2002.
[127] O. Oralkan, A. S. Ergun, C. Ching-Hsiang, J. A. Johnson, M. Karaman, T. H. Lee, and B. T. Khuri-Yakub, "Volumetric ultrasound imaging using 2-D CMUT arrays," IEEE Trans. Ultrason., Ferroelectr., Freq. Control, vol. 50, pp. 1581–1594, 2003.
[128] I. O. Wygant, Z. Xuefeng, D. T. Yeh, O. Oralkan, A. S. Ergun, M. Karaman, and B. T. Khuri-Yakub, "Integration of 2D CMUT arrays with front-end electronics for volumetric ultrasound imaging," IEEE Trans. Ultrason., Ferroelectr., Freq. Control, vol. 55, pp. 327–342, 2008.
[129] D. E. Dausch, J. B. Castellucci, D. R. Chou, and O. T. von Ramm, "Theory and operation of 2-D array piezoelectric micromachined ultrasound transducers," IEEE Trans. Ultrason., Ferroelectr., Freq. Control, vol. 55, pp. 2484–2492, 2008.
[130] R. E. Daigle, "Ultrasound imaging system with pixel oriented processing," WIPO Patent Applicat. WO/2006/113445, Apr. 4, 2006.
[131] G. Carneiro, B. Georgescu, and S. Good, "Knowledge-based automated fetal biometrics," 2008 [Online]. Available: http://www.medical.siemens.com/siemens/it_IT/gg_us_FBAs/files/misc_downloads/Whitepaper_AutoOB.pdf

Kerem Karadayi (S'98) received the B.S. degree in electrical and electronics engineering from Bilkent University, Ankara, Turkey, and the M.S. degree in electrical engineering from the University of Washington, Seattle. He is currently pursuing the Ph.D. degree at the Department of Electrical Engineering, University of Washington, Seattle. He is a Research Assistant at the Image Computing Systems Laboratory, Department of Electrical Engineering and Department of Bioengineering, University of Washington. His research interests include high-temporal-resolution ultrasound imaging, 3-D/4-D ultrasound, real-time image/video computing, and computer architecture.

Ravi Managuli (S'93–M'01) received the Ph.D. degree in electrical engineering from the University of Washington, Seattle, in 2000. He is a Senior Scientist with Hitachi Medical Systems of America, Twinsburg, OH, and an Affiliate Professor of bioengineering at the University of Washington. His current research interests include ultrasound algorithms and applications, high-performance computer architecture, and real-time image- and signal-processing algorithms. He is also a registered Sonographer with specialization in general ultrasound imaging.

Yongmin Kim (S'78–M'82–SM'87–F'96) received the B.S. degree in electronics engineering from Seoul National University, Korea, and the M.S. and Ph.D. degrees in electrical engineering from the University of Wisconsin, Madison. He is a Professor of bioengineering and of electrical engineering and an Adjunct Professor of radiology and of computer science and engineering at the University of Washington, Seattle. He has more than 450 publications. His group has more than 70 patents, and 26 commercial licenses have been signed. His research interests are in distributed diagnosis and home healthcare, multimedia algorithms and systems, and medical imaging. Prof. Kim received the Early Career Achievement Award from the IEEE Engineering in Medicine and Biology Society (EMBS) in 1988 and the 2003 Ho-Am Prize in Engineering. He has been a member of the Editorial Board of the PROCEEDINGS OF THE IEEE, IEEE TRANSACTIONS ON BIOMEDICAL ENGINEERING, and IEEE TRANSACTIONS ON INFORMATION TECHNOLOGY IN BIOMEDICINE. He was President of the IEEE EMBS for 2005 and 2006.
