

Lossless Embedded Compression Algorithm with Context-based Error Compensation for Video Application

Hyerim Jeong, Jaehyun Kim, Kyohyuk Lee, Kiwon Yoo, and Jaemoon Kim, Member, IEEE

Abstract - With the increase of image resolution in video applications, memory bandwidth is a critical problem in video coding. An embedded compression algorithm is a technique that compresses the frame data when it is stored in memory, which makes it possible to reduce memory requirements. In this paper, we propose a lossless embedded compression algorithm with context-based error compensation to reduce the memory bandwidth requirement. Experimental results show at least a 50% memory bandwidth reduction on average, and the data reduction ratio of the proposed algorithm is up to 5% higher than that of a previously proposed lossless embedded compression algorithm [2].

I. INTRODUCTION

With the increased spatial and temporal resolutions of recent video applications, memory bandwidth is a key factor in system design. Assuming a typical video coding scenario employing two reference frames, a search range of [-64, 64], and the level-C data reuse scheme [1], the resultant memory bandwidth is as much as 2.4 gigabytes per second for 1080HD@60 progressive video, which incurs a huge increase of system cost as well as power dissipation caused by excessive memory accesses.
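As a rough, illustrative cross-check of this figure (our own back-of-envelope estimate, not a calculation taken from [1]): with 16x16 macroblocks (N = 16) and a vertical search range of +/-64 pixels, level-C reuse reloads roughly R_a = (128 + N) / N = 9 reference luma pixels per current pixel, so the reference-fetch traffic alone is about 1920 x 1080 x 60 x 9 x 2 = 2.2 GB/s for two reference frames; chroma and current-frame traffic bring the total close to the 2.4 GB/s quoted above.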
To effectively resolve this problem, a method called embedded compression (EC), which compresses the frame data just before it is stored in memory, has been investigated. Here, a significant reduction of the system memory requirement is possible, since only compressed data is transferred between the video processor and the memory.

Like general data compression approaches, EC algorithms are classified into lossy and lossless ones. Lossy EC algorithms guarantee a fixed compression ratio per coding unit and are preferred for open-loop video processing, which is less affected by error propagation. On the other hand, lossless EC algorithms are necessary for closed-loop video processing, especially for video coding.

In this paper, we propose a lossless EC algorithm for video application. The overall procedure is as follows: first, the input pixel value is differential-coded using the average value of the upper and left pixel values of the current one. The resulting coding error is compensated with the proposed temporal context-based model. Finally, the coded error is entropy-coded using the Significant Bit Truncation (SBT) method [2].

Hyerim Jeong, Jaehyun Kim, Kyohyuk Lee, Kiwon Yoo, and Jaemoon Kim are with the DMC (Digital Media Communication) R&D Center, Samsung Electronics, Suwon, Korea (e-mail: hyerim.jeong@samsung.com; jhgim@samsung.com; kyohyuk.lee@samsung.com; ykiwon@samsung.com; jaemoonc.kim@samsung.com).

II. PROPOSED ALGORITHM

This section is organized as follows: first, we describe the proposed average prediction, followed by the context-based error compensation. Finally, the SBT-based entropy coding [2] adopted in the proposed algorithm is presented.

A. Average Prediction

The average prediction scheme used here is shown in Fig. 1. The current pixel value x is differential-coded using the average value of the upper and left pixel values of the current one, b and a, respectively. The pixels placed on the left or top of the random access unit are predicted by copying horizontally or vertically, and the others are predicted using the average value of the upper and left pixels of the current pixel.

Fig. 1. Proposed average prediction scheme of the random access unit.

The comparison of the average prediction with the MED predictor (1) used in JPEG-LS [3] is presented in Table I.

x_MED = min(a, b)      if c >= max(a, b),
      = max(a, b)      else if c <= min(a, b),
      = a + b - c      otherwise.                        (1)

x_AVG = floor((a + b + 1) / 2).                          (2)

According to our experiments on HD video sequences, the average predictor (2) shows as little as 0.16 bit-per-pixel of entropy degradation. In addition, it has a simpler prediction structure with no comparator and fewer adders, and is advantageous especially for hardware implementation.
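As an illustration of this prediction step, the following C sketch contrasts the MED predictor (1) with the average predictor (2) and applies the boundary rule described above. It is a minimal sketch under our own assumptions: the function names, the 8-bit pixel type, and the handling of the very first pixel of a random access unit (which the paper does not specify) are ours.

#include <stdint.h>

/* MED predictor of JPEG-LS, eq. (1): a = left, b = upper, c = upper-left. */
static uint8_t pred_med(uint8_t a, uint8_t b, uint8_t c)
{
    uint8_t mn = a < b ? a : b;
    uint8_t mx = a < b ? b : a;
    if (c >= mx) return mn;
    if (c <= mn) return mx;
    int p = (int)a + (int)b - (int)c;   /* lies between min(a,b) and max(a,b) here */
    return (uint8_t)p;
}

/* Average predictor, eq. (2): two additions and one shift. */
static uint8_t pred_avg(uint8_t a, uint8_t b)
{
    return (uint8_t)(((unsigned)a + (unsigned)b + 1u) >> 1);
}

/* Prediction inside one random access unit of width w: top-row pixels copy
 * from the left, left-column pixels copy from above, interior pixels use the
 * average of the upper and left neighbours. */
static uint8_t predict(const uint8_t *unit, int w, int x, int y)
{
    if (x == 0 && y == 0) return 128;                /* first pixel: mid-level assumed */
    if (y == 0) return unit[x - 1];                  /* top row: horizontal copy */
    if (x == 0) return unit[(y - 1) * w];            /* left column: vertical copy */
    return pred_avg(unit[y * w + (x - 1)], unit[(y - 1) * w + x]);
}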



TABLE I
COMPARISON OF THE PROPOSED AVERAGE PREDICTION WITH MED [3] FOR HD VIDEO SEQUENCES

Prediction method   Entropy (32x4 block)   Arithmetic operations/pixel
MED [3]             2.60                   3 comparisons, 1 addition, 2 subtractions
Average             2.76                   2 additions, 1 shift

B. Context-based Prediction Error Compensation

Fig. 2(a) presents the prediction error distribution. In the worst case of compression performance, the prediction error distribution is wider than in the best case. As shown in Fig. 2(b), the prediction error distribution becomes more concentrated around zero when the context-based error compensation is used.

Fig. 2. Prediction error distributions: (a) the prediction error distributions for the worst and best compression performance; (b) the prediction error distributions with and without the context-based prediction error compensation scheme.

Fig. 3(a) illustrates the context templates in the proposed algorithm. The context of the current pixel x is a function of the three gradients g1 = d - b, g2 = b - c, and g3 = c - a, defined as

context = {g1, g2, g3}.                                  (3)

For low complexity and storage efficiency, we quantize the context conditions into 9 steps using the threshold levels T1, T2, and T3 (T1 = 3, T2 = 7, T3 = 21). Thus, the quantization regions are represented as {0}, {1, 2}, {3, 4, 5, 6}, {7, 8, ..., 20}, and {21, ..., 255}, indexed from -4 to 4. This gives a total of (2T + 1)^3 = 729 contexts (T = 4). By merging contexts of opposite signs, the total number of contexts becomes ((2T + 1)^3 + 1) / 2 = 365 context conditions [3]. We call this condition CTX999.
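The following C sketch shows one way such a gradient quantizer and context index can be formed. The packing of the three quantized gradients into a single index and the sign-merging rule (negate the triple when its first non-zero component is negative, as in JPEG-LS [3]) are our own illustration; the paper does not spell out these details.

/* Quantize a signed gradient into one of 9 levels indexed -4..4,
 * using the thresholds T1 = 3, T2 = 7, T3 = 21. */
static int quantize_gradient(int g)
{
    int sign = g < 0 ? -1 : 1;
    int m = g < 0 ? -g : g;
    int q;
    if      (m == 0) q = 0;
    else if (m < 3)  q = 1;
    else if (m < 7)  q = 2;
    else if (m < 21) q = 3;
    else             q = 4;
    return sign * q;
}

/* Map (g1, g2, g3) to a context index. Contexts of opposite signs are merged
 * by negating the whole triple when its first non-zero component is negative,
 * which reduces the 729 raw combinations to 365 reachable conditions. */
static int context_index(int g1, int g2, int g3)
{
    int q1 = quantize_gradient(g1);
    int q2 = quantize_gradient(g2);
    int q3 = quantize_gradient(g3);
    if (q1 < 0 || (q1 == 0 && (q2 < 0 || (q2 == 0 && q3 < 0)))) {
        q1 = -q1; q2 = -q2; q3 = -q3;       /* fold onto the mirrored context */
    }
    /* 0..728 packing; only 365 values occur after folding, so a lookup table
     * of that size (or a remapping) is enough in hardware. */
    return ((q1 + 4) * 9 + (q2 + 4)) * 9 + (q3 + 4);
}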

Meanwhile, in typical error compensation using a context-based model, coding errors are accumulated according to the contexts that correspond to the gradient values between neighboring pixels within a frame. In EC algorithms, however, the use of such a context-based model is largely constrained because each small coding unit is dealt with independently, which causes a lack of statistical data for context accumulation. In order to circumvent this obstacle, we take advantage of the context-based model calculated from the pixels of the previously processed frame that is closest to the current one, as depicted in Fig. 3(b).

Fig. 3. An example of the proposed context-based error compensation scheme on the image samples: (a) the context model used on the image samples; (b) a scenario for the context-based error compensation scheme across the previously coded frame, the current coded frame, and the next coded frame.

Context modeling methods require a huge memory size for recording their context conditions, in spite of their superior compression performance.
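A minimal sketch of how this temporal, context-based compensation could be organized is given below: per-context error statistics are accumulated while one frame is coded, and the rounded per-context mean is applied as a bias when the next frame is coded. The two-pass structure, the buffer layout, and the function names are our reading of the scheme rather than details taken from the paper.

#include <stdint.h>

#define NUM_CTX 365                        /* CTX999: 365 merged context conditions */

typedef struct {
    int32_t err_sum[NUM_CTX];              /* accumulated prediction errors per context */
    int32_t count[NUM_CTX];                /* number of samples per context */
} ctx_stats_t;

/* Bias learned from the previously coded frame for a given context. */
static int ctx_bias(const ctx_stats_t *prev, int ctx)
{
    int32_t n = prev->count[ctx];
    int32_t s = prev->err_sum[ctx];
    if (n == 0) return 0;
    return (int)((s >= 0 ? s + n / 2 : s - n / 2) / n);   /* rounded mean error */
}

/* Compensate one prediction error with the temporal context statistics and
 * update the statistics that the next frame will use. The 'cur' statistics
 * must be zero-initialized at the start of each frame. */
static int compensate_error(int raw_err, int ctx,
                            const ctx_stats_t *prev, ctx_stats_t *cur)
{
    int coded_err = raw_err - ctx_bias(prev, ctx);   /* value passed to entropy coding */
    cur->err_sum[ctx] += raw_err;
    cur->count[ctx]   += 1;
    return coded_err;
}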
To reduce the memory size for recording context conditions, we also quantize the context conditions into 7 steps using only the T1 and T2 threshold levels, which gives a total of 172 context conditions; we call this condition CTX777. We further quantize the context conditions into 5 steps using only the T1 threshold level, which gives a total of 63 context conditions; we call this condition CTX555.

The compression performance of CTX999, CTX777, and CTX555 is presented in Table II.

TABLE II
COMPRESSION PERFORMANCE OF CTX999, CTX777, AND CTX555

Video sequence    CTX999   CTX777   CTX555
Aspen             2.52:1   2.49:1   2.33:1
ControlledBurn    1.94:1   1.92:1   1.87:1
RedKayak          2.40:1   2.39:1   2.36:1
RushFieldCuts     2.09:1   2.07:1   2.03:1
SnowMnt           2.08:1   2.06:1   1.96:1
SpeedBag          2.65:1   2.64:1   2.63:1
TouchdownPass     2.16:1   2.15:1   2.14:1
WestWindEasy      2.86:1   2.84:1   2.77:1
Average           2.34:1   2.32:1   2.26:1

We obtain 2.34:1, 2.32:1, and 2.26:1 compression performance on average using the proposed temporal contexts. Moreover, even with the reduced context conditions, the proposed context scheme still achieves above 2:1 compression performance on average.

Furthermore, we also employ only g2 and g3 as the context, as shown in Fig. 4. Here, each gradient is again quantized into 9, 7, and 5 steps, so that the number of context conditions is largely reduced to 41, 25, and 13. We call these conditions CTX990, CTX770, and CTX550, respectively.

Fig. 4. An example of the reduced context conditions in the image samples.

The compression performance of CTX990, CTX770, and CTX550 is presented in Table III. We obtain 2.33:1, 2.31:1, and 2.25:1 compression performance on average for CTX990, CTX770, and CTX550, and only a small memory size is needed to store the context conditions because of the dramatically reduced number of context conditions.

TABLE III
COMPRESSION PERFORMANCE OF CTX990, CTX770, AND CTX550

Video sequence    CTX990   CTX770   CTX550
Aspen             2.50:1   2.46:1   2.29:1
ControlledBurn    1.93:1   1.91:1   1.87:1
RedKayak          2.41:1   2.40:1   2.34:1
RushFieldCuts     2.08:1   2.06:1   2.02:1
SnowMnt           2.05:1   2.04:1   1.95:1
SpeedBag          2.65:1   2.65:1   2.64:1
TouchdownPass     2.16:1   2.15:1   2.14:1
WestWindEasy      2.87:1   2.83:1   2.75:1
Average           2.33:1   2.31:1   2.25:1

C. Entropy Coding

The compensated prediction error is entropy-coded using the Significant Bit Truncation (SBT) method proposed in [2]. It is worth noting that the average difference between the theoretical upper bound of the SBT method and the entropy is proven to be only 0.74 bit-per-pixel. Besides its simplicity, this coding performance makes SBT appropriate for EC algorithms.
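The essential idea of significant bit truncation, as used here, is to code a small group of residuals with just enough bits for the largest one. The sketch below is a simplified illustration of that idea, not the exact syntax of [2]: the group size, the sign mapping, and the 4-bit header for the shared bit width are our assumptions.

/* Encode one group of n residuals with a shared bit width: a 4-bit header
 * carries the number of significant bits k, then each residual is written
 * with k bits. put_bits(value, nbits) stands in for a real bitstream writer
 * (a call with nbits == 0 is assumed to write nothing). */
static void sbt_encode_group(const int *residual, int n,
                             void (*put_bits)(unsigned value, int nbits))
{
    int k = 0;
    for (int i = 0; i < n; i++) {
        /* map signed residuals to non-negative values (zig-zag style) */
        unsigned m = residual[i] >= 0 ? 2u * (unsigned)residual[i]
                                      : 2u * (unsigned)(-residual[i]) - 1u;
        int bits = 0;
        while ((1u << bits) <= m) bits++;   /* significant bits of m */
        if (bits > k) k = bits;
    }
    put_bits((unsigned)k, 4);               /* group header: shared bit width */
    for (int i = 0; i < n; i++) {
        unsigned m = residual[i] >= 0 ? 2u * (unsigned)residual[i]
                                      : 2u * (unsigned)(-residual[i]) - 1u;
        put_bits(m, k);                     /* k bits suffice for every value */
    }
}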
III. EXPERIMENTAL RESULTS

For performance evaluation, the proposed EC algorithm is integrated into a typical video coding application. Specifically, the HEVC reference software HM6.0 [5] is used with the low-delay coding configuration. In this case, the reference frames are compressed by the proposed EC algorithm to reduce the memory requirements of the application.

First, we evaluate the performance gain of the proposed algorithm against the same algorithm without the context-based error compensation scheme. For this experiment, we use a high-texture sequence, CrowdRun, and a low-texture sequence, SpeedBag. The compression ratios are given in Table IV.

TABLE IV
COMPRESSION RATIOS OF A HIGH TEXTURE SEQUENCE AND A LOW TEXTURE SEQUENCE, CROWDRUN AND SPEEDBAG, RESPECTIVELY

                                                     CrowdRun   SpeedBag
When context-based error compensation is used       1.6:1      2.6:1
When context-based error compensation is not used   1.5:1      2.6:1

The context-based error compensation scheme is effective for the high-texture sequence: the large prediction errors of a high-texture sequence are compensated using the contexts.
Fig. 5. Experimental results of the proposed EC algorithm compared with Kim's algorithm [2], reported as DRR (Data Reduction Ratio): (a) QP22, (b) QP27, (c) QP32, and (d) QP37.
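For reference, the data reduction ratio plotted in Fig. 5 is read here as the fraction of the original frame size that is saved; under that assumption (our own, as the paper does not define the metric explicitly), a 2:1 compression ratio corresponds to a DRR of 50%:

#include <stddef.h>

/* DRR under the assumption DRR = 1 - compressed/original. */
static double data_reduction_ratio(size_t original_bytes, size_t compressed_bytes)
{
    return 1.0 - (double)compressed_bytes / (double)original_bytes;
}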

Second, we conduct experiments over various QP values and compare the proposed algorithm with Kim's algorithm [2] on 14 video sequences. The experimental results are illustrated in Fig. 5.

According to the experimental results illustrated in Fig. 5, the proposed EC algorithm shows at least a 50% data reduction ratio on average, which is higher than that of Kim's algorithm. In particular, for the video sequence Johnny, the compression ratio of the proposed EC algorithm is up to 5% higher than that of Kim's algorithm. This gain in compression ratio comes from the proposed temporal context-based error compensation.

The compression performance gain of the proposed algorithm can enhance the video coding efficiency by enlarging the search range of motion estimation [4] or by reducing the additional memory bandwidth needed for various video applications.

IV. CONCLUSION

In this paper, we proposed a lossless embedded compression algorithm for video application. The proposed algorithm consists of three steps: average prediction, context-based error compensation, and SBT coding. The average prediction has lower complexity than the other prediction methods considered. Through the context-based error compensation, more than 5% additional data reduction is achieved with no quality degradation and no bit-rate increase. Moreover, the context conditions can be stored with only a small increase in memory size, thanks to the temporal contexts and the largely reduced quantized regions of the context conditions.

REFERENCES

[1] J.-C. Tuan, T.-S. Chang, and C.-W. Jen, "On the data reuse and memory bandwidth analysis for full-search block-matching VLSI architecture," IEEE Trans. Circuits Syst. Video Technol., vol. 12, no. 1, pp. 61-72, Jan. 2002.
[2] J. Kim and C.-M. Kyung, "A lossless embedded compression using significant bit truncation for HD video coding," IEEE Trans. Circuits Syst. Video Technol., vol. 20, no. 6, pp. 848-860, June 2010.
[3] M. J. Weinberger, G. Seroussi, and G. Sapiro, "The LOCO-I lossless image compression algorithm: Principles and standardization into JPEG-LS," IEEE Trans. Image Processing, vol. 9, pp. 1309-1324, Aug. 2000.
[4] J. Jung, J. Kim, and C.-M. Kyung, "A dynamic search range algorithm for stabilized reduction of memory traffic in video encoder," IEEE Trans. Circuits Syst. Video Technol., vol. 20, no. 7, pp. 1041-1046, July 2010.
[5] HEVC Reference Software HM6.0, March 2012. [Online]. Available: http://hevc.kw.bbc.co.uk/trac/browser/tags/HM-6.0
