[Figure 1: Original image. Figure 2: IDCT reconstruction. Figure 3: Processing time (s) versus image size (KB), comparing CPU time and GPU time.]
A sample image and its inverse DCT reconstruction are shown in Figure 1 and Figure 2. As the chart in Figure 3 shows, CPU time grows as the image size increases, whereas the same increase produces only a negligible rise in GPU processing time. We also observed that uploading the image from the CPU to the GPU takes longer than the entire GPU computation itself. Even so, we can conclude that implementing DCT image compression on the GPU is far superior to computing it solely on the CPU.
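For reference, the per-block transform that the GPU parallelizes across the image can be sketched as follows. This is a minimal CPU-side illustration, not the paper's Cg implementation: it builds the orthonormal 8x8 DCT-II matrix and applies the forward and inverse 2D transform to one block (the block size of 8 and the matrix formulation are standard assumptions for DCT image compression, not details taken from the experiments above).

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix: row k holds the k-th cosine basis vector."""
    k = np.arange(n)
    M = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    M[0] *= 1 / np.sqrt(2)          # DC row scaled so that M @ M.T == I
    return M * np.sqrt(2 / n)

def dct2(block, M):
    """Forward 2D DCT of one n x n block (rows, then columns)."""
    return M @ block @ M.T

def idct2(coeffs, M):
    """Inverse 2D DCT; exact because M is orthonormal (M^-1 == M.T)."""
    return M.T @ coeffs @ M
```

On the GPU, each 8x8 block of the image is handed to a separate thread or fragment and transformed independently with exactly this kind of matrix product, which is why GPU time in Figure 3 stays nearly flat as the image grows.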
Our experimental results clearly indicate that the GPU is much more efficient as a parallel processor for DCT image compression. They also show that increasing the image size slows the CPU while leaving the GPU largely unaffected. Although we gained considerable time by processing the DCT blocks in parallel on the GPU, we encountered some limitations with Cg, since it was not designed for general-purpose programming on the GPU.