IMAGE PROCESSING FOR OBJECT TRACKING USING VIRTEX FPGA

1M. Srinivas, 2Y. Raghavender Rao
1Lecturer, JNTUH Nachupally (Kondagattu), KNR
2Assoc. Professor, JNTUH Nachupally (Kondagattu), KNR
e-mail: srinuv97@gmail.com, yraghavenderrao@gmail.com

International Journal of Systems, Algorithms & Applications, Volume 2, Issue 2, February 2012, ISSN Online: 2277-2677

Abstract - In this paper, a hardware solution to the object tracking problem using image processing principles is presented. Object tracking is the process of detecting moving objects of interest and plotting their trajectories by analyzing successive frames. A solution based on image histograms is used to detect object motion in the color image frames captured by a CCD camera. The captured video frames are subjected to background subtraction in order to identify the moving object in the captured image. The resulting difference frame is used to determine the displacement and velocity of the moving vehicles in the captured scene. Velocities of target vehicles within a range of 100 meters to 3 kilometers are easily identified and processed. The SoC hardware design supports the real-time requirements of common video at more than 30 fps. VHDL code for computing the object displacement and velocity is developed, and the simulation results are displayed in the ModelSim waveform window. The same code is then synthesized using the Xilinx ISE tools, targeting the Virtex-4 FPGA. In order to verify the functionality of the hardware, an equivalent program is written in MATLAB.

I. INTRODUCTION

Tracking a moving object in a complicated scene is a difficult problem; if the objects as well as the camera are moving, the resulting motion becomes even more complicated. Particle-filter based approaches are often employed to model such complicated tracking problems. A general-purpose object tracking algorithm is needed in many applications, including video compression, driver assistance, and video post-editing. Other applications are surveillance via intelligent security cameras, perceptual user interfaces that make use of a user's gestures or movements, and placing a 3-D tracked real object into a virtual environment in augmented reality.

Objectives

For our implementation of object tracking, we assume that the CCD camera position is fixed and that only the vehicles are moving, at a determined rate, so that background difference methods can be used. Background subtraction principles are applied to identify the motion of the object in the captured video frame. The image histogram of the static input frame is compared with the histograms of successive frames in order to detect any change in the object, that is, motion of the vehicles. Once motion is identified, the displacement of the object in pixels is computed, which in turn is used to determine the velocity of the moving object. A SoC design which performs the above operations is presented.

II. IMPLEMENTATION

Basics of Object Tracking

Tracking a moving object in a complicated scene is a difficult problem, and if objects as well as the camera are moving, the resulting motion may get more complicated. The ports of the image object velocity SoC design are shown in Figure 2.1. The SoC is designed in such a way that an input is accepted only when data_in_valid is true, and the output values are valid only when output_valid is true.

Figure 2.1. Object velocity SoC

Figure 2.2. Building blocks of the object velocity SoC

The basic building blocks of the object velocity SoC are shown in Figure 2.2. The output video lines of the CCD camera are stored in two SRAMs, namely Frame A and Frame B. Address generation logic is coded in order to read the 8 x 8 pixel blocks from the SRAM. The basic principle employed in finding the vertical and horizontal histograms is accumulation, as shown in Figure 2.3. For the vertical accumulation, all the 8-bit pixel data in each column of the frame must be added; for each column, this corresponds to the addition of the 8-bit data of all 64 pixels. Similarly, the horizontal accumulation involves a row-wise addition: the 8-bit data of all 64 pixels in each row are added. The accumulated data are stored in two 1 x 64 arrays, namely Hy and Hx, called the frame-vertical and frame-horizontal arrays respectively. Each element in these bins is 16 bits wide, allowing for the maximum possible accumulated value. For these operations to be completed in minimum time, the entire frame


data would need to be stored in memory; however, this results in inefficient memory utilization. Instead, the frame is read one 1 x 64 vector at a time, so that the entire process requires 64 such read-in and read-out operations. The requirement to process 1 x 64 vectors of data, instead of the entire 64 x 64 frame at once, makes the accumulation more complex. The technique used here is that, as and when a 1 x 64 vector is read, its pixels are put into the 64 vertical bins separately; further, all 64 pixel data of the vector are accumulated into the corresponding horizontal bin. This completes the processing of a single vector. When a new vector is read in, it is again added to the current contents of the vertical bins, and its horizontal accumulation is the same as for the first 1 x 64 vector. After 64 such operations, the complete vertical and horizontal accumulation is accomplished.

Figure 2.3. Horizontal and vertical averaging
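As a software reference for this accumulation scheme, a minimal MATLAB sketch is given below. The random test frame and the variable names frame, Hy and Hx are illustrative assumptions; processing one row per loop iteration mimics the 1 x 64 vector reads of the hardware:

    % Vertical (Hy) and horizontal (Hx) accumulation of a 64 x 64 frame,
    % one 1 x 64 vector at a time, as in the SoC design.
    frame = randi([0, 255], 64, 64);   % stand-in for an 8-bit gray frame

    Hy = zeros(1, 64);                 % vertical bins: one sum per column
    Hx = zeros(1, 64);                 % horizontal bins: one sum per row
    for r = 1:64
        vec   = frame(r, :);           % read one 1 x 64 vector
        Hy    = Hy + vec;              % pixel k is added into vertical bin k
        Hx(r) = sum(vec);              % whole vector accumulates into row bin r
    end
    % Maximum possible sum is 64 * 255 = 16320, which fits in the 16-bit bins.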

Two-Point Moving Average Filter

In the process of capturing, recording, image processing and detecting the object, or some combination of these, errors and noise may creep into the image. Smoothing is used primarily to diminish the effect of spurious noise and to blur the false-contour pixels that may be present in a digital image. Here, each new value is calculated as the average of the brightness values of two successive entries of the bins. To achieve parallel processing, and hence ensure high-speed operation, two moving average filters are used in this design; simultaneous filtering of the two bins allows for less processing time. For an array f(i), the resulting average array a(i) of the 2-point moving average algorithm is

    a(i) = f(i)/2,              for i = 1 or i = N;
    a(i) = (f(i) + f(i+1))/2,   otherwise,

where N is the maximum length of the array. It is important to note that the new value is equal to half of the original at the start and end of the array; these positions correspond to the border of the frame being processed. The entire processing is concentrated on the specific object being tracked, so it is generally ensured during pre-processing (image segmentation or pattern identification) that the desired object lies sufficiently near the middle of the frame to allow easy and accurate processing.

Maxima Index Finder

A simple technique is used for finding the maximum and its index in a given array. The values in the Hx and Hy bins correspond to the accumulated gray-level intensity of the object in that row or column of the image being processed. Therefore, the indices of the maxima in the vertical accumulator bin (Hy) and the horizontal accumulator bin (Hx) correspond to the column number and the row number, respectively, of the object of interest in the actual 2-D image. Hence, the index value is important rather than the magnitude of the maximum value. This is the exact principle used in object tracking for determining the position of the object (missile) in the image.
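A minimal MATLAB sketch of these two steps is given below. The function name mavg2 and the usage lines are illustrative assumptions; the paper's hardware implements the same operations in VHDL, with two filter instances running in parallel:

    % mavg2.m -- 2-point moving average with halved values at the borders,
    % as defined above. Illustrative sketch of the filtering step.
    function a = mavg2(f)
        f = double(f);
        N = numel(f);
        a = zeros(1, N);
        a(1) = f(1) / 2;                 % border values are halved
        a(N) = f(N) / 2;
        for i = 2:N-1
            a(i) = (f(i) + f(i+1)) / 2;  % average of two successive bins
        end
    end

    % Maxima index finder: the index of the maximum, not its magnitude,
    % gives the object position (column from Hy, row from Hx).
    [~, objCol] = max(mavg2(Hy));
    [~, objRow] = max(mavg2(Hx));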

Block Processing

Block processing produces the same results as processing the image all at once. In distinct block processing, the distinct blocks are rectangular or square partitions that divide a matrix into m x n sections.

Figure 2.4. Block processing of the input frame

Figure 2.4 shows one block (8 x 8) of an image frame (512 x 512). Pixel intensity values are indicated as integer values of an array ranging from 0 to 255. Hx and Hy are the partial horizontal and vertical sums of the given 8 x 8 matrix.

Velocity Estimation Principle

A moving object also changes location within its region of interest (ROI) and therefore needs to be distinguished in every frame of a sequence of consecutive images. Once the object is segmented in a number of frames, the displacement of the object between two image frames must be extracted. The displacement of the bounding box corresponds to the distance covered by the object in reality; the establishment of this correspondence is a problem by itself and often forces a need for calibration. When the whole object is observed in one of the frames, the difference between two consecutive frames shows two leftovers instead of one, and additional effort is then required to relate these two as belonging to a single object.
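This principle can be sketched in MATLAB as follows, under the paper's static-camera assumption and with a black reference frame as used in this implementation. The synthetic frames, sizes and object positions are illustrative assumptions, and mavg2 is the filter sketch from above:

    % Displacement of a bright object between two consecutive frames.
    frame1 = zeros(64); frame1(30:34, 20:24) = 200;  % object in frame 1
    frame2 = zeros(64); frame2(30:34, 26:30) = 200;  % object moved 6 px right
    ref    = zeros(64);                              % black reference frame

    frames = {frame1, frame2};
    objCol = zeros(1, 2);
    for k = 1:2
        d  = abs(frames{k} - ref);        % background subtraction
        Hy = sum(d, 1);                   % vertical accumulation (column sums)
        [~, objCol(k)] = max(mavg2(Hy));  % maxima index -> object column
    end
    np = abs(objCol(2) - objCol(1));      % X-displacement in pixels (here 6)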

Velocity Estimation Process

Figure 2.5. Object velocity estimation

The process of velocity measurement used in this implementation depends on a number of parameters, mainly coupled to the camera in use, such as the image resolution, the frame frequency ff,

and the view angle θ. Another parameter of importance is the distance between the camera and the moving object, dp. Given ff in frames/second, dp in meters and θ in degrees, the width of the captured scenery is da = 2 dp tan(θ/2). Figure 2.5 illustrates a camera with the involved parameters indicated. An object with velocity v (meters/second) will cover the distance da (meters) in t = da / v (seconds). During this time, the camera takes N = t ff = da ff / v frames. In other words, if all the frames are superimposed, there will be N instances of the moving object on a single frame. If W, in pixels, denotes the width of the frames delivered by the camera, the movement of the object corresponds to a displacement in pixels given by np = W / N = (W v) / (da ff). The minimum velocity that can be detected corresponds to a single-pixel displacement of the object. In order to overcome this limitation, a 5% margin of the total frame width is provided on both vertical edges. Obviously, the maximum displacement in pixels is correlated with the maximum object velocity that can be detected. Typical PAL camera specifications (horizontal view angle of 60°, 720-pixel-wide frames, and a frame rate of 25 frames/s) are utilized in [11], where the displacements of a 3-meter-long object are shown for different speeds. Obviously, the displacement depends on the distance dp of the camera from the captured scenery in which the object moves, and the size of the blob depends on dp as well. Such dependencies can be resolved by non-linear post-processing of the blob sizes and displacements over a sequence of images, which effectively removes these accuracy considerations from the presented algorithm.

Velocity Estimation Hardware Process

The difference in the recording times of the two frames gives the time interval of the motion of the object. The velocity of the object moving in the X-direction is obtained by finding the maximum in the horizontal direction (this step is used in our implementation, assuming the input reference frame is completely black); similarly, the velocity of the object in the Y-direction is obtained by finding the maximum in the vertical direction (this step is not utilized here). The pixel displacement is computed by assuming a fixed distance between the camera axis and the moving object, with the camera axis always perpendicular to the object's direction of motion.
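Inverting the displacement relation gives v = (np da ff) / W, i.e., the velocity recovered from a measured pixel displacement. A short MATLAB sketch with the quoted PAL figures follows; dp and np are assumed values for illustration only:

    % Velocity from pixel displacement: v = np * da * ff / W.
    theta = 60;    % horizontal view angle in degrees (PAL spec above)
    W     = 720;   % frame width in pixels (PAL spec above)
    ff    = 25;    % frame rate in frames/s (PAL spec above)
    dp    = 100;   % camera-to-object distance in meters (assumed)
    np    = 8;     % measured object displacement in pixels (assumed)

    da   = 2 * dp * tand(theta / 2);   % width of captured scenery, ~115.5 m
    v    = np * da * ff / W;           % object velocity, ~32.1 m/s
    vmin = 1 * da * ff / W;            % single-pixel displacement, ~4.0 m/s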

III. RESULTS

3.1. VHDL Simulation Results

Figure 3.1. Velocity computation simulated result

Figure 3.2. Image object velocity SoC chip pin details

3.2. MATLAB Simulation

Figure 3.5. Frame 1, Frame 2 and Frame 3

Figure 3.6. Red, green and blue histograms of the frames

IV. CONCLUSION

The image object velocity SoC is developed using the ModelSim and Xilinx EDA tools. The image histogram, the two-point moving average filter, the maxima index finder and the velocity computation have been implemented successfully. In order to verify the results of this SoC design, an equivalent MATLAB code is also developed; all the results of the VHDL simulation and of MATLAB match bit by bit. The SoC is synthesized targeting the Xilinx Virtex-4 FPGA on the ML404 PCI-based FPGA development board. The synthesis results reveal that the SoC is able to run at a speed of 133 MHz, which indicates that the system is capable of processing at least 30 fps of 720 x 480 NTSC frames. The image object velocity for the first two sample input frames simulated is around 111 meters per second; hence the objective of the SoC design for object velocity estimation has been met.


V. REFERENCES
[1] Massimo Piccardi, "Background subtraction techniques: a review," IEEE International Conference on Systems, Man and Cybernetics, 2004.
[2] Chris Stauffer and W.E.L. Grimson, "Adaptive background mixture models for real-time tracking," IEEE, 1999.
[3] Ahmed Elgammal, David Harwood and Larry Davis, "Non-parametric model for background subtraction," 6th European Conference on Computer Vision, Dublin, Ireland, June/July 2000.
[4] Thanarat Horprasert Chalidabhongse, Kyungnam Kim, David Harwood and Larry Davis, "A Perturbation Method for Evaluating Background Subtraction Algorithms."
[5] Makito Seki, Hideto Fujiwara and Kazuhiko Sumi, "A Robust Background Subtraction Method for Changing Background," IEEE, 2000.
[6] C. Yang, R. Duraiswami, N. Gumerov and L. Davis, "Improved Fast Gauss Transform and Efficient Kernel Density Estimation," IEEE International Conference on Computer Vision, pp. 464-471, 2003.

[7] Christopher Wren, Ali Azarbayejani, Trevor Darrell and Alex Pentland, "Pfinder: Real-Time Tracking of the Human Body," IEEE, 1996.
[8] Chris Stauffer and W.E.L. Grimson, "Adaptive background mixture models for real-time tracking," IEEE, 1999.
[9] Donald L. Hung, H.D. Cheng and Savang Sengkhamyong, "Design of a Configurable Accelerator for Moment Computation," IEEE Transactions on VLSI Systems, Vol. 8, No. 6, December 2000.
[10] Ahmed Elgammal, David Harwood and Larry Davis, "Non-parametric model for background subtraction," 6th European Conference on Computer Vision, Dublin, Ireland, June/July 2000.
[11] Suleyman Malki, G. Deepak, Vincent Mohanna, Markus Ringhofer and Lambert Spaanenburg, "Velocity Measurement by a Vision Sensor," CIMSA 2006 - IEEE International Conference on Computational Intelligence for Measurement Systems and Applications, La Coruna, Spain, 12-14 July 2006.
