
Implementing Light Field Image Refocusing Algorithm

Wenxing Fu,
Ministry of Industry and Information Technology,
Institute of Flight Control and Simulation Technology at College of Astronautics, Northwestern
Polytechnical University,
Xi’an, China,
wenxingfu@nwpu.edu.cn

Xi Tong,
Xi’an, China,
18629578621@163.com

Chuan Shan,
Xi’an, China,
chuanbinglian@163.com

Supeng Zhu,
Xi’an, China,
supengzh@nwpu.edu.cn

Baijia Chen,
Xi’an, China,
461825362@qq.com

Abstract—Light field imaging captures information on the direction of light rays; therefore, refocusing the raw image shot with a light field camera can produce high-resolution images of objects at different distances. This paper presents the light field image refocusing principles: it rearranges raw images, transforms virtual image planes and integrates light rays, thus obtaining ideal refocused images. It then implements both space domain refocusing and frequency domain refocusing, applies the Fourier slicing theorem to implement the frequency domain refocusing algorithm, and uses the fast Fourier transform to enhance its efficiency. Through experiments, it compares the computational efficiency of the space domain refocusing algorithm and the frequency domain refocusing algorithm and concludes that the latter has higher computational efficiency.

Keywords—light field imaging; space domain refocusing; frequency domain refocusing; algorithms; Fourier slicing theorem; image processing; computational efficiency

978-1-4673-7519-1/15/$31.00 ©2015 IEEE

I. Introduction

Light field imaging can obtain 4D information about the imaged object, namely its direction and distance; it is therefore worth studying and applying. Because of its special micro-lens array, the light field camera has a new mechanism of shooting first and focusing afterwards, yielding panoramic deep imaging. This makes possible high-speed imaging, measurement of an object's distance and other challenging tasks. The digital refocusing of light field imaging extracts 2D images from the raw image shot with the light field camera based on its 4D information, thus obtaining images focused at different depths.

The study of 4D information acquisition with the light field began with the plenoptic camera[1] developed by Edward H. Adelson and John Y. A. Wang in 1992 and the light field rendering theory[2] proposed by Levoy in 1996. In 2005, R. Ng et al. simplified the design of the plenoptic camera. Improvements in micro-lens precision made it possible to build a hand-size light field camera, which, combined with the study of the refocusing algorithm, subsequently yielded the frequency domain refocusing algorithm. In 2008, using the light field camera refocusing technique[3], Xiao Xiangguo et al. incorporated an evaluation function, accomplished focusing distance measurement, and obtained the measurement range and precision estimates of a distance measurement system, but did not carry out an in-depth study of the refocusing algorithm implementation; only a fast refocusing method can obtain good distance measurement results. From 2010 to 2011, Yuan Yan performed a massive study[4] of the light field camera, analyzed its installation errors and carried out its computer simulation. Zhou Yu proposed a space domain and frequency domain image resolution evaluation method, which helps to assess a refocusing algorithm's performance. Xu Jing studied the integrated imaging technique and a depth extraction and linear interpolation enhancement method[5], thus realizing the depth extraction of a 3D scene. In 2012, Zhou Zhiliang carried out a systematic study of the light field imaging technique in theory, design and application[6], and designed and developed a micro-lens array light field camera, but did not study the refocusing algorithm in depth. In 2014, Zhou Wenhui et al. studied the Lytro camera's image rectification and refocusing method[7], which is, however, more complicated than the frequency domain refocusing algorithm and not effective enough.

We use the light field camera's structural features and its refocusing principles to elaborate on the procedural steps, improvements and computational efficiency of the space domain refocusing algorithm and the frequency domain refocusing algorithm.

II. The structural features of a light field camera

Fig. 1 The structure of a light field camera

By adding a micro-lens array in front of a traditional camera's image sensor, the light field camera records the light field in a single exposure. It is made up of an imaging main lens, a micro-lens array in good order and an ordinary sensor array; Fig. 1 gives the layout of these devices. Just like an ordinary camera, the main lens is arranged along the principal optic axis, the difference being that each micro-lens in the micro-lens array covers the same number of sensor pixels. For these reasons, the light field camera can not only measure the energy of the light received by the pixels but also record the transmission paths of light rays. From the position of an imaging point in the sensing region covered by a micro-lens, we can derive the position (u, v) at which the light ray passes through the main lens; the position of the micro-lens through which the light ray passes is (x, y) in the micro-lens array. The path of the light ray from the main lens to the micro-lens is thereby recorded as (x, y, u, v), so the light ray can be described with the parameter L = L(x, y, u, v).

III. The light field imaging refocusing principles

Analyzing digital refocusing from a mathematical perspective describes the light field distribution more accurately. The sum of all the light rays in a traditional camera that pass from the main lens to a certain point on the sensor is the irradiance of that point in the image formed by the sensor, as shown in Equation (1):

E_F(x, y) = (1/F²) ∫∫ L_F(x, y, u, v) cos⁴θ du dv    (1)

Here E_F(x, y) denotes the irradiance at the point (x, y) of the image plane; F denotes the distance between the main lens and the image plane; L_F is the light field energy function when the image plane is at distance F; θ denotes the angle between the light ray (x, y, u, v) and the image plane; and cos⁴θ is the vignetting attenuation factor. The light field energy function is only associated with F, while cos⁴θ is only associated with θ. In the process of digital focusing, the different image planes are all parallel, so the value of θ is fixed. Therefore, the cos⁴θ factor can be absorbed into the light field energy function L_F, and the above equation simplifies to:

E_F(x, y) = (1/F²) ∫∫ L_F(x, y, u, v) du dv    (2)

Equation (2) clearly shows that the irradiance received at the point (x, y) of the image plane is actually the sum of all the light rays radiated from all the points on the surface of the main lens to the point (x, y) on the image plane. Such is the physical meaning of the integral formula.

In a traditional camera the (u, v) parameter is continuous over its interval, and there is no way to record it. The light field camera, however, segments the main lens into a discrete form according to its directional resolution and thus can record the direction of light rays. In order to describe in a simple way how to obtain one image plane by digitally refocusing a known image plane, we reduce the light field from four dimensions to a two-dimensional plane. Digital focusing can be understood as changing the distance between the image plane and the main lens, i.e. as the imaging process of obtaining a virtual image plane from a known image plane, as shown in Fig. 2.

Fig. 2 The digital focusing of light rays
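Because the camera samples (u, v) discretely, one sample per pixel under each micro-lens, the integral in Equation (2) becomes a finite sum over each macro-pixel. A minimal NumPy sketch (the function name and array layout are illustrative assumptions, not from the paper):

```python
import numpy as np

def irradiance_at_plane_F(L, F=1.0):
    """Discrete form of Eq. (2): E_F(x, y) = (1/F^2) * sum_{u,v} L_F(x, y, u, v).

    L is a 4D light field array indexed as L[x, y, u, v], where (x, y)
    selects a micro-lens (macro-pixel) and (u, v) selects the position
    on the main lens, i.e. the pixel under that micro-lens.
    """
    return L.sum(axis=(2, 3)) / F**2

# A toy light field: 4x5 micro-lenses, 10x10 angular samples each,
# mirroring the 10x10 pixels per micro-lens described in Section IV.
rng = np.random.default_rng(0)
L = rng.random((4, 5, 10, 10))

E = irradiance_at_plane_F(L)
print(E.shape)  # one irradiance value per micro-lens: (4, 5)
```

Summing the macro-pixel this way is exactly why the refocused image resolution equals the number of micro-lenses.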
If a virtual image plane lies at distance F′ from the main lens, then a light ray that passes through x on the F plane also passes through x′ on the virtual image plane. According to the similar-triangle theorem, x can be denoted by (x′ − u)F/F′ + u. The change in the y and v parameters is the same as that in the x and u parameters. For convenience of description, letting α = F′/F, the same light ray on the F′ plane can be expressed as:

L_{F′}(x′, y′, u, v) = L_F(u + (x′ − u)/α, v + (y′ − v)/α, u, v)
                    = L_F(u(1 − 1/α) + x′/α, v(1 − 1/α) + y′/α, u, v)    (3)

Therefore, if the image plane is changed to F′ = αF, the irradiance of a point (x′, y′) on that plane is given by Equation (4):

E_{(α·F)}(x′, y′) = (1/(α²F²)) ∫∫ L_F(u(1 − 1/α) + x′/α, v(1 − 1/α) + y′/α, u, v) du dv    (4)

IV. Implementing space domain refocusing

The raw image shot by the Lytro light field camera has 3280×3280 pixels; each micro-lens corresponds to a detector region of 10×10 pixels; the micro-lens array has the shape of a hexagonal honeycomb. The procedural steps for space domain refocusing are as follows:

● Orthogonalization of the micro-lens images makes possible a tidy arrangement of the macro-pixels to which each micro-lens corresponds. The highly exposed white image shot by the Lytro camera is used to rearrange its raw image with the MATLAB toolkit. Specifically, taking the micro-lens size as the unit, the raw image is rearranged and realigned according to the odd or even line numbers. The aligned raw image has 1910×3300 pixels and contains 191×330 micro-lenses, as shown in Fig. 3. The number of micro-lenses determines the number of pixels of the image eventually generated through refocusing.

Fig. 3 The orthogonalization of raw images (the left-side and right-side images are before and after orthogonalization respectively)

● Equations (3) and (4) show that u and v do not change. If L_{F′}(x′, y′, u, v) are the pixel points of the virtual image plane, then L_F(x, y, u, v) are the pixel points of the real sensor plane. The 2D raw image L_raw(i, j) is expressed as the 4D L_F(x, y, u, v): x is the quotient of i divided by p_u; u is the remainder of i divided by p_u; y is the quotient of j divided by p_v; v is the remainder of j divided by p_v.

● Equation (4) is used to change the position of the light ray to which each pixel corresponds. Given L_F(x, y, u, v), the positions at which the F′ plane is crossed along the same light ray are x′ = (x − u)×α + u and y′ = (y − v)×α + v, where u and v are in units of pixels.

● Each macro-pixel of the virtual image plane to be obtained is then integrated. Because the image pixels form discrete data sets, the integration can be expressed as:

E_{(α,F)}(x′, y′) = (1/(α²F²)) Σ_{u=0}^{SubHeight} Σ_{v=0}^{SubWidth} L_{F′}(x′, y′, u, v)    (5)

The purposes of the integration are to effectively suppress unnecessary random interference and to enhance the visual effect of image refocusing; its disadvantage, however, is that the resolution of the image is reduced to the number of micro-lenses. The integration obtains the energy received by each micro-lens by superimposing the energy of the pixels within the micro-lens.

In the following, we carry out the space domain refocusing transformation of the raw image shot by the light field camera and shown in Fig. 4.

Fig. 4 The raw image to be processed

Because a clear foreground plane is desired and α = F′/F, while the focal length does not change, the smaller the object distance, the larger the image distance. Therefore, α should be larger than 1, increasing from 1 at intervals of 0.01 in 14 cases. The following are some of the typical images.
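The steps above amount to a shift-and-add algorithm: each (u, v) sub-view is shifted by an amount proportional to (1 − 1/α) and then averaged. A simplified NumPy sketch (the names are hypothetical; it expresses the Eq. (3) shift relative to the aperture centre, ignores the hexagonal layout and the global 1/α magnification, and uses nearest-neighbour rounding instead of interpolation):

```python
import numpy as np

def refocus_space_domain(L, alpha):
    """Shift-and-add refocusing, a simplified discrete form of Eqs. (3)-(5).

    L: 4D light field, L[x, y, u, v], with (x, y) indexing micro-lenses
       and (u, v) indexing the pixels under each micro-lens.
    alpha: ratio F'/F of the virtual image plane distance to F.
    Returns one refocused value per micro-lens.
    """
    nx, ny, pu, pv = L.shape
    cu, cv = (pu - 1) / 2.0, (pv - 1) / 2.0   # centre of the aperture samples
    out = np.zeros((nx, ny))
    xs, ys = np.arange(nx), np.arange(ny)
    for u in range(pu):
        for v in range(pv):
            # Eq. (3) shift, taken relative to the aperture centre and
            # rounded to the nearest micro-lens (alpha = 1 means no shift).
            sx = np.clip(np.round(xs + (u - cu) * (1 - 1 / alpha)).astype(int), 0, nx - 1)
            sy = np.clip(np.round(ys + (v - cv) * (1 - 1 / alpha)).astype(int), 0, ny - 1)
            out += L[np.ix_(sx, sy)][:, :, u, v]
    return out / (pu * pv)   # Eq. (5), up to the constant 1/(alpha^2 F^2)
```

For α = 1 no shift is applied and the result reduces to the plain macro-pixel integration of Eq. (2).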
Fig. 5 The improved refocusing results (panels show α = 1, 1.02, 1.04, 1.06, 1.08 and 1.10)

Fig. 6 The contrast of the α = [1, 1.01, 1.02, …, 1.10] foreground window

To analyze the refocused imaging quality, we use various evaluation functions to evaluate the image of an object contained in a foreground window of 70×79 pixels, as shown in Fig. 6. For the purpose of comparison, we normalize the transformed data as a percentage of their peak value; the normalization results are given in Fig. 7.

Fig. 7 The refocused image quality evaluation results

α = 1 is the result of integrating the macro-pixels of the image shot with the light field camera; the integration enables a smooth transition between micro-lens images and the removal of noise points.

These images show that the focused plane moves further away from the sensor plane as α increases, and that the background plane becomes more and more blurred as the foreground plane becomes clearer and clearer. When α reaches about 1.04, the image is the clearest; as α continues to increase, the focused plane continues to move forward, and the foreground plane starts to blur.

V. The frequency domain refocusing principles

For the frequency domain transform of a 2D image, to achieve the same transform effect while reducing the computational complexity, we use the Fourier slicing theorem to process the image. The Fourier slicing theorem, also called the central slicing theorem[8], states that the 1D Fourier transform of the projection g_θ(R) of a density function g(x, y) along a certain direction equals the values of the 2D Fourier transform G(k_x, k_y) of g(x, y) taken along the straight line through the origin of the (k_x, k_y) plane in the same direction. As shown in Fig. 8, the digital refocusing of an image shot with the light field camera is actually a 2D slicing of the 4D light field and is therefore an application of the generalized Fourier slicing theorem.
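The 2D central slicing theorem is easy to verify numerically: summing g(x, y) along one axis is a projection, and its 1D DFT equals the zero-frequency row of the 2D DFT. A small NumPy check (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)
g = rng.random((64, 48))          # an arbitrary 2D "density function"

# Projection of g along the x-axis: sum over rows gives a function of y.
projection = g.sum(axis=0)

# Central slice: the k_x = 0 row of the 2D Fourier transform of g.
central_slice = np.fft.fft2(g)[0, :]

# Fourier slicing theorem: the 1D FT of the projection equals the slice.
print(np.allclose(np.fft.fft(projection), central_slice))  # True
```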


Fig. 8 The generalized Fourier slicing theorem (containing the mathematical notations used in this paper)

In 1997, on the basis of the 2D transform, Liang and Munson defined the generalized Fourier slicing theorem[9] for a function f(X) with N dimensions. They first used a matrix transform operator to change the structure of the function, then reduced it to M dimensions with the projection (integral) operator, and finally applied the M-dimensional Fourier transform. This equals first taking the N-dimensional Fourier transform of the function, then applying the inverse transpose of the matrix transform, and finally slicing it down to M dimensions with the slicing operator; the result is given in Equation (6):

F^M ∘ L^N_M ∘ B = S^N_M ∘ (B^{-T} / |B^{-T}|) ∘ F^N    (6)

Such is the generalized Fourier slicing theorem. Its operators are defined as follows:

A. The integral operator

The integral operator L^N_M reduces a function from N dimensions to M dimensions by integrating over the remaining N − M variables, as shown in Equation (7):

L^N_M[f](x_1, …, x_M) = ∫ f(x_1, …, x_N) dx_{M+1} … dx_N    (7)

B. The slicing operator

The slicing operator S^N_M reduces a function from N dimensions to M dimensions by setting the last N − M variables to zero, as given in Equation (8):

S^N_M[f](x_1, …, x_M) = f(x_1, …, x_M, 0, …, 0)    (8)

C. The matrix transform operator

The matrix transform operator B is an N×N matrix, with B^{-1} its inverse. When the variable X of the function f(X) is an N-dimensional vector, Equation (9) holds:

B[f](X) = f(B^{-1}X)    (9)

Thus a function can be transformed by defining the matrix B.

D. The zooming operator

The zooming operator α scales the function f(X) by multiplying it with α, as shown in Equation (10):

α[f](X) = α f(X)    (10)

E. The Fourier transform operator

F^N denotes the N-dimensional Fourier transform and F^{-N} its inverse, as shown in Equation (11):

F^N[f](U) = ∫ f(X) exp(−2πi (X·U)) dX    (11)

where X and U are N-dimensional vectors.

Ren Ng was the earliest researcher to apply the generalized Fourier slicing theorem to the frequency domain refocusing of a light field image[10]. In his paper, he defined an imaging operator P_α, which enables the camera with parameter F (namely the distance between the micro-lens array and the main lens) to form an image on the αF virtual image plane, as shown in Equation (12):

P_α[L_F] = (1/(α²F²)) L^4_2[B_α[L_F]]    (12)

The purpose of the operator is to achieve digital focusing and extract a 2D image from the 4D light field. The projection from the 4D image to the 2D image is expressed with the integral operator L^4_2. Here, the transform between the vectors (x′, y′, u, v) and (x, y, u, v) is as follows:

[x′]   [α  0  1−α  0 ] [x]
[y′] = [0  α  0  1−α] [y]
[u ]   [0  0  1   0 ] [u]
[v ]   [0  0  0   1 ] [v]

The matrix transform operator B_α is thus obtained:

      [α  0  1−α  0 ]
B_α = [0  α  0  1−α]
      [0  0  1   0 ]
      [0  0  0   1 ]

The above equation can be expressed as:

P_α[L_F] = (1/(α²F²)) F^{-2} ∘ (F^2 ∘ L^4_2 ∘ B_α)[L_F]

The generalized Fourier slicing theorem produces:

P_α[L_F] = (1/(α²F²)) F^{-2} ∘ (S^4_2 ∘ (B_α^{-T} / |B_α^{-T}|) ∘ F^4)[L_F]

With |B_α^{-T}| = |B_α|^{-1} = 1/α², this produces:

P_α[L_F] = (1/F²) F^{-2} ∘ S^4_2 ∘ B_α^{-T} ∘ F^4[L_F]    (13)

Equation (13) clearly shows that obtaining the 2D focused image from the 4D light field image requires the frequency domain refocusing processes: the matrix transform, slicing and then zooming. These processes are described by defining the C_α operator:

C_α = (1/F²) S^4_2 ∘ B_α^{-T}    (14)

Introducing the function G to be sliced produces:

C_α[G](k_x, k_y) = (1/F²) G(B_α^T (k_x, k_y, 0, 0))

Substituting the matrix B_α produces:

C_α[G](k_x, k_y) = (1/F²) G(α k_x, α k_y, (1−α)k_x, (1−α)k_y)    (15)

The Fourier transforms of both the 4D and the 2D image can be computed with the fast Fourier transform algorithm, as shown in Fig. 9. Equation (15) shows that slicing in the Fourier frequency domain involves only extracting and rearranging, within the Fourier-transformed frequency domain, the pixels that correspond to the 4D light field image. This has a much smaller computational complexity than the integral projection and is a highly efficient and easily implemented algorithm.

Fig. 9 The frequency domain refocusing (the 4D light field L_F and its spectrum Γ_F; the spatial-domain imaging operator P_α corresponds in the frequency domain to slicing the spectrum, and the 2D slice ψ_{α·F} is inverse-transformed by F^{-2} into the refocused image E_{α·F})

VI. Implementing frequency domain refocusing

Because the light field image consists of discrete signals, implementing frequency domain digital refocusing requires the discrete Fourier transform (DFT). Firstly, Equation (13) shows that it is necessary to perform the 4D discrete Fourier transform of the raw image, whose form is:

F(x′, y′, u′, v′) = Σ_{x=0}^{K−1} Σ_{y=0}^{O−1} Σ_{u=0}^{M−1} Σ_{v=0}^{N−1} L_F(x, y, u, v) e^{−j2π(xx′/K + yy′/O + uu′/M + vv′/N)}

where F(x′, y′, u′, v′) is the irradiance of the transformed image; L_F(x, y, u, v) is the irradiance of the raw image; the size of the raw image is K×O×M×N; and the parameters run over u = 0, 1, 2, …, M−1, v = 0, 1, 2, …, N−1, x = 0, 1, 2, …, K−1, and y = 0, 1, 2, …, O−1.

The 2D discrete Fourier transform has the form:

F(u, v) = Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} f(x, y) e^{−j2π(ux/M + vy/N)}

where f(x, y) is an M×N digital image, with x = 0, 1, 2, …, M−1 and y = 0, 1, 2, …, N−1.

The DFT is separable, so the 2D discrete Fourier transform can be written in terms of 1D Fourier transforms as follows:

F(u, v) = Σ_{x=0}^{M−1} e^{−j2πux/M} Σ_{y=0}^{N−1} f(x, y) e^{−j2πvy/N} = Σ_{x=0}^{M−1} F(x, v) e^{−j2πux/M}    (16)

where

F(x, v) = Σ_{y=0}^{N−1} f(x, y) e^{−j2πvy/N}    (17)

For each value of x and for v = 0, 1, 2, …, N−1, F(x, v) is clearly the 1D DFT of one row of f(x, y). All the rows of f(x, y) are transformed with Equation (17), with x running from 0 to M−1; then all the columns of F(x, v) are transformed with the 1D DFT in Equation (16). Thus we conclude that a multi-dimensional DFT can be computed by performing 1D DFTs along each row or column of the image in turn.

Implementing the DFT directly involves a large volume of computation. Taking the 2D DFT as an example, one 2D DFT needs approximately (MN)² multiplications and additions. One DFT of a medium-resolution image of, for example, 1024×1024 pixels requires on the order of trillions of multiplications and additions; computing in the frequency domain this way cannot improve efficiency. The fast Fourier transform (FFT), however, can: the 2D FFT of the same 1024×1024 image requires only about 20,000,000 multiplications and additions.

We implement the frequency domain digital refocusing of the following raw image shot with the Lytro light field camera:
Fig. 10 The raw image
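The row–column separability of Equations (16) and (17), which the 4D procedure below relies on, can be checked directly with NumPy (illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
f = rng.random((8, 16))   # an M x N "image"

# 1D DFT of every row (Eq. 17), then 1D DFT of every column (Eq. 16).
rows_then_cols = np.fft.fft(np.fft.fft(f, axis=1), axis=0)

# Direct 2D DFT gives the same result.
print(np.allclose(rows_then_cols, np.fft.fft2(f)))  # True
```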
To carry out the 4D DFT, we first perform the 1D Fourier transform of each macro-pixel's columns, namely over the u parameter, and of each macro-pixel's rows, namely over the v parameter. Then, taking one pixel at the same position within each macro-pixel and using the macro-pixel as the unit, we perform the 1D Fourier transform over x, one column of pixels at a time. Finally, using the macro-pixel as the unit, we perform the 1D Fourier transform over y, one row of pixels at a time. The results thus obtained are the complex-valued data F(k_x, k_y, k_u, k_v). To observe the transformed image, the data must be shifted so that the zero frequency is at the center and displayed as absolute values, as shown in Fig. 11; a partially enlarged view is shown in Fig. 12.

Fig. 11 The image after the 4D discrete Fourier transform

Fig. 12 The partially amplified image
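The transform-and-center step described above can be sketched as follows (a minimal NumPy version; the 4D layout L[x, y, u, v] is an assumption, and np.fft.fftn internally performs the successive 1D FFTs just described):

```python
import numpy as np

def light_field_spectrum(L):
    """4D DFT of the light field with the zero frequency moved to the centre.

    Returns the complex spectrum F(kx, ky, ku, kv) and the magnitude
    array used for display (Figs. 11-12 show such absolute values).
    """
    F = np.fft.fftn(L, axes=(0, 1, 2, 3))   # separable: 1D FFTs along each axis
    F_centered = np.fft.fftshift(F)          # move DC to the centre for viewing
    return F_centered, np.abs(F_centered)

rng = np.random.default_rng(3)
L = rng.random((16, 16, 4, 4))
F_c, mag = light_field_spectrum(L)
print(F_c.shape)   # (16, 16, 4, 4)
```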

Then, we use Equation (15) to slice the complex-valued data F(k_x, k_y, k_u, k_v) down to two dimensions, thus obtaining G(k_{x′}, k_{y′}). Because (k_{x′}, k_{y′}) in G(k_{x′}, k_{y′}) corresponds to (k_x, k_y) in F(k_x, k_y, k_u, k_v), the resolution of the sliced image is the number of macro-pixels. This is consistent with the space domain integral refocusing results, and the refocused image obtained from the Lytro light field camera is 191×330.
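The slice-and-invert step of Equation (15) can be sketched as follows, under the same assumed L[x, y, u, v] layout and with nearest-neighbour sampling of the 4D spectrum (a real implementation would interpolate and handle the larger Lytro arrays):

```python
import numpy as np

def refocus_frequency_domain(L, alpha):
    """Refocus via the Fourier slice of Eq. (15):
    G(kx, ky) = F(alpha*kx, alpha*ky, (1-alpha)*kx, (1-alpha)*ky)."""
    nx, ny, pu, pv = L.shape
    F = np.fft.fftn(L)                        # 4D spectrum (one-time cost)
    kx = np.fft.fftfreq(nx) * nx              # signed integer frequency indices
    ky = np.fft.fftfreq(ny) * ny
    KX, KY = np.meshgrid(kx, ky, indexing="ij")
    # Nearest-neighbour sample of the 2D slice through the 4D spectrum.
    ix = np.round(alpha * KX).astype(int) % nx
    iy = np.round(alpha * KY).astype(int) % ny
    iu = np.round((1 - alpha) * KX).astype(int) % pu
    iv = np.round((1 - alpha) * KY).astype(int) % pv
    G = F[ix, iy, iu, iv]
    # The 2D inverse transform gives the refocused image; the constant
    # 1/F^2 of Eq. (15) is dropped, as the text notes it only scales brightness.
    return np.real(np.fft.ifft2(G)) / (pu * pv)
```

For α = 1 the slice picks k_u = k_v = 0, and the result reduces to the macro-pixel integration of the space domain method.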
Finally, we perform the 2D inverse Fourier transform and obtain the refocusing results: the focused image plane varies with the value of α. Because the factor 1/F² in Equation (15) is a constant and affects only the overall brightness of the image, it can be neglected during the transform. The transform results, given in Fig. 13, show that as α diminishes, the clear regions of the images change gradually from close shots to long shots. The six images clearly show the frequency domain refocusing effects.

Fig. 13 The frequency domain refocusing effects
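The operation counts quoted in Section VI can be reproduced with a quick back-of-the-envelope calculation (the FFT count uses the common N·log₂N estimate, so the constant factor is approximate):

```python
import math

M = N = 1024                          # image size used in the text

direct_ops = (M * N) ** 2             # direct 2D DFT: ~(MN)^2 operations
fft_ops = M * N * math.log2(M * N)    # 2D FFT: ~MN * log2(MN) operations

print(f"direct 2D DFT: {direct_ops:.1e} ops")   # ~1.1e12, i.e. trillions
print(f"2D FFT:        {fft_ops:.1e} ops")      # ~2.1e7, i.e. about 20 million
```

These estimates match the "trillions" versus "20,000,000" figures cited for a 1024×1024 image.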

VII. Analyzing the computational efficiency of the light field image refocusing algorithms

The interrelationship between the space domain refocusing algorithm and the frequency domain refocusing algorithm is shown in Fig. 9. The space domain refocusing algorithm uses integral projection: it first performs the 4D integration of the raw image and then multiplies by the transform matrix determined by α. The frequency domain refocusing algorithm, however, performs the 4D discrete Fourier transform of the raw image, then slices it in the 2D frequency domain and finally performs the 2D inverse Fourier transform, obtaining the same result as the space domain refocusing algorithm.

The time complexity of the integral projection in the space domain refocusing algorithm is O(n⁴), while that of the 4D discrete Fourier transform in the frequency domain refocusing algorithm is O(n⁴ log n); the 2D slice projection is O(n²), and the 2D inverse Fourier transform is O(n² log n). Since the 4D transform needs to be computed only once and can then be reused, each additional refocused image costs only the O(n²) slicing and the O(n² log n) inverse transform, far less than the O(n⁴) integration the space domain algorithm spends on every image. The computational complexity of the frequency domain refocusing algorithm is therefore evidently lower than that of the space domain refocusing algorithm.

A computer configured with an Intel i7-4710H processor and 8 GB of memory is used to execute the refocusing algorithms developed in MATLAB and to compare the computational efficiency of the two types of refocusing algorithm. All the light field images processed in the following experiments were shot with the Lytro light field camera.

Experiment 1: The two refocusing algorithms are executed on the same raw image with different values of α. The execution time of the frequency domain refocusing algorithm is essentially constant; its average computation time is 9.5570 seconds, with variations on the order of 0.01 second. The average computation time of the space domain refocusing algorithm is 117.0110 seconds, also with variations on the order of 0.01 second.

Experiment 2: The two refocusing algorithms are executed on different raw images with the same values of α. Their execution times do not vary, with average computation times of 9.55 and 117.1 seconds respectively.

Experimental results: Different values of α do not affect the execution time of either refocusing algorithm. When the same refocusing algorithm processes the same raw image shot with the Lytro light field camera, its computing time does not change. The computational efficiency of the frequency domain refocusing algorithm is higher than that of the space domain refocusing algorithm.

VIII. Conclusion

This paper presents the structural features of the light field camera and analyzes the light field image refocusing principles. It uses refocusing algorithms developed in MATLAB to implement space domain refocusing and frequency domain refocusing, and it uses raw images shot with the Lytro light field camera in refocusing experiments to verify both the space domain refocusing algorithm and the frequency domain refocusing algorithm. Finally, it compares the computational efficiency of the two types of refocusing algorithm and concludes that the computational efficiency of the frequency domain refocusing algorithm is higher.

References

[1] Adelson E H, Wang J Y A. Single Lens Stereo with a Plenoptic Camera[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1992, 14(2): 99-106.

[2] Levoy M, Hanrahan P. Light Field Rendering[C]//Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques. ACM, 1996: 31-42.

[3] Xiao Xiangguo, Wang Zhonghou, Sun Chuandong, et al. A Focusing Distance Measurement Technology Based on Light Field Photography[J]. Acta Photonica Sinica, 2008, 37(12): 2539-2543.

[4] Yuan Yan, Zhou Yu, Hu Huanghua. Analyzing Registration Errors in Light Field Camera's Micro-Lens Array and Detector[J]. Acta Photonica Sinica, 2010, 39(1): 123-126.

[5] Xu Jing. Integrated Imaging and Light Field Imaging Based on Micro-Lens Array[D]. University of Science and Technology of China, 2011.

[6] Zhou Zhiliang. Research on Light Field Imaging Technology[D]. University of Science and Technology of China, 2012.

[7] Zhou Wenhui, Lin Lili. Lytro Light Field Camera's Image Rectification and Refocusing Method[J]. Journal of Image and Graphics, 2014, 19(7): 1006-1011.

[8] Deans S R. The Radon Transform and Some of Its Applications[M]. Wiley-Interscience, 1983.

[9] Liang Z P, Munson D C. Partial Radon Transforms[J]. IEEE Transactions on Image Processing, 1997, 6(10): 1467-1469.

[10] Ng R. Fourier Slice Photography[J]. ACM Transactions on Graphics, 2005, 24(3): 735-744.
