Abstract: Useful inaccuracy measures and mean codeword lengths are well
known in the literature of information theory. In this communication, a new
generalized useful inaccuracy of order $\alpha$ and type $\beta$ is proposed, and a
coding theorem is established for this measure together with a generalized
average useful codeword length. Our motivation for studying this measure is
that it generalizes some results already existing in the literature.
1. Introduction
Consider the model given below for a finite random experiment scheme having
$(x_1, x_2, ..., x_n)$ as a complete system of events, happening with respective
probabilities $P = (p_1, p_2, ..., p_n)$ and credited with utilities $U = (u_1, u_2, ..., u_n)$,
$u_i > 0$, $i = 1, 2, ..., n$. Denote

$$S = \begin{bmatrix} x_1 & x_2 & \cdots & x_n \\ p_1 & p_2 & \cdots & p_n \\ u_1 & u_2 & \cdots & u_n \end{bmatrix}. \qquad (1.1)$$

We call (1.1) a utility information scheme.
Received: September 19, 2006. © 2006, Academic Publications Ltd.
Corresponding author.

Let $Q = (q_1, q_2, ..., q_n)$ be the predicted distribution having the utility distribution
$(u_1, u_2, ..., u_n)$. Taneja and Tuteja [14] have suggested and characterized
the useful inaccuracy measure

$$I(P;Q;U) = -\sum_{i=1}^{n} u_i p_i \log_D q_i. \qquad (1.2)$$
Taneja and Tuteja [14] derived the lower and upper bounds on $L(U)$ in terms
of $I(P;Q;U)$.
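To make the measure (1.2) concrete, here is a small numerical sketch; the function name `useful_inaccuracy` and the base-$D$ logarithm convention are our own assumptions for illustration, not notation from the paper.

```python
import math

def useful_inaccuracy(u, p, q, D=2):
    """Useful inaccuracy (1.2): I(P;Q;U) = -sum_i u_i * p_i * log_D(q_i)."""
    return -sum(ui * pi * math.log(qi, D) for ui, pi, qi in zip(u, p, q))

# When Q = P and all utilities equal 1, the measure reduces to the Shannon
# entropy of P; for a uniform P over 4 events and D = 2 this is log_2 4 = 2.
u = [1.0, 1.0, 1.0, 1.0]
p = q = [0.25, 0.25, 0.25, 0.25]
print(useful_inaccuracy(u, p, q))  # ~2.0
```

Unequal utilities simply reweight each term, so events deemed more useful contribute more to the inaccuracy.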
Bhatia [3] defined the useful average codeword length of order $\alpha$ as

$$L_\alpha(U) = \frac{1}{D^{\frac{1-\alpha}{\alpha}} - 1}\left[\sum_{i=1}^{n} \frac{u_i p_i}{\sum_{i=1}^{n} u_i p_i}\, D^{l_i\left(\frac{1-\alpha}{\alpha}\right)} - 1\right], \qquad (1.4)$$

where $\alpha > 0$ ($\alpha \neq 1$), $\sum_{i=1}^{n} p_i \le 1$, and $D$ is the size of the code
alphabet.

He also derived the bounds for the useful average code length of order $\alpha$ in terms of the useful inaccuracy of order $\alpha$, given by

$$I_\alpha(P;Q;U) = \frac{1}{D^{\frac{1-\alpha}{\alpha}} - 1}\left[\left(\frac{\sum_{i=1}^{n} u_i p_i q_i^{\alpha-1}}{\sum_{i=1}^{n} u_i p_i}\right)^{\frac{1}{\alpha}} - 1\right], \qquad (1.5)$$

where $\alpha > 0$ ($\alpha \neq 1$), $p_i \ge 0$, $\sum_{i=1}^{n} p_i \le 1$, and $D$ is the size of the
code alphabet.
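As a sanity check on the two order-$\alpha$ expressions as reconstructed above, both should reduce to their ordinary counterparts as $\alpha \to 1$: $L_\alpha(U)$ to the utility-weighted mean codeword length, and $I_\alpha(P;Q;U)$ to the useful inaccuracy (1.2) normalized by $\sum u_i p_i$. A hedged numerical sketch (function names and sample values are ours, not the paper's):

```python
import math

def L_alpha(u, p, l, alpha, D=2):
    """Useful average codeword length of order alpha, as in eq. (1.4)."""
    t = (1 - alpha) / alpha
    W = sum(ui * pi for ui, pi in zip(u, p))
    s = sum(ui * pi / W * D ** (li * t) for ui, pi, li in zip(u, p, l))
    return (s - 1) / (D ** t - 1)

def I_alpha(u, p, q, alpha, D=2):
    """Useful inaccuracy of order alpha, as in eq. (1.5)."""
    t = (1 - alpha) / alpha
    W = sum(ui * pi for ui, pi in zip(u, p))
    s = (sum(ui * pi * qi ** (alpha - 1)
             for ui, pi, qi in zip(u, p, q)) / W) ** (1 / alpha)
    return (s - 1) / (D ** t - 1)

# Order-1 limits for a small two-event example:
u, p, q, l = [2.0, 1.0], [0.5, 0.5], [0.5, 0.5], [1, 2]
W = sum(ui * pi for ui, pi in zip(u, p))
mean_len = sum(ui * pi * li for ui, pi, li in zip(u, p, l)) / W          # = 4/3
norm_inacc = -sum(ui * pi * math.log(qi, 2)
                  for ui, pi, qi in zip(u, p, q)) / W                    # = 1.0
print(L_alpha(u, p, l, 1.001), mean_len)
print(I_alpha(u, p, q, 1.001), norm_inacc)
```

Evaluating at $\alpha = 1.001$ rather than exactly 1 avoids the removable singularity in the common factor $1/(D^{(1-\alpha)/\alpha} - 1)$.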
Under the condition

$$\sum_{i=1}^{n} p_i q_i^{\alpha-1} D^{-l_i} \le 1, \qquad (1.6)$$

Tuteja [11] considered the problem of a useful information measure and used it in
studying the noiseless coding theorems for sources involving utilities.
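Condition (1.6) is a Kraft-type feasibility constraint on the codeword lengths. A quick numeric check (the helper name and sample values are our own illustration):

```python
def kraft_sum(p, q, l, alpha, D=2):
    """Left-hand side of condition (1.6): sum_i p_i * q_i**(alpha-1) * D**(-l_i)."""
    return sum(pi * qi ** (alpha - 1) * D ** (-li) for pi, qi, li in zip(p, q, l))

# Two equiprobable events, one-digit binary codewords, alpha = 2:
# sum = 2 * (1/2) * (1/2) * (1/2) = 0.25 <= 1, so (1.6) holds.
s = kraft_sum([0.5, 0.5], [0.5, 0.5], [1, 1], alpha=2)
print(s)  # 0.25
```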
In the next section, we shall study some coding theorems for a generalized
useful inaccuracy of order $\alpha$ and type $\beta$ for incomplete probability distributions.
2. Coding Theorems
$$L_\alpha(U) \ge I_\alpha(P;Q;U), \qquad (2.4)$$
By Hölder's inequality,

$$\left(\sum_{i=1}^{n} x_i^{p}\right)^{\frac{1}{p}} \left(\sum_{i=1}^{n} y_i^{q}\right)^{\frac{1}{q}} \le \sum_{i=1}^{n} x_i y_i, \qquad (2.6)$$

where $\frac{1}{p} + \frac{1}{q} = 1$, $p < 1$ ($p \neq 0$), and $x_i, y_i > 0$.
Putting

$$y_i = \left(\frac{u_i p_i}{\sum_{i=1}^{n} u_i p_i}\right)^{\frac{1}{1-\alpha}} q_i^{\frac{\alpha-1}{1-\alpha}}$$

in (2.6), using (2.3) and after making suitable operations, we get (2.4) for
$D^{\frac{1-\alpha}{\alpha}} - 1 \neq 0$, according as $\alpha \neq 1$.
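As a sanity check on (2.4) as reconstructed: for a uniform two-event source with unit utilities, $Q = P$, and an optimal one-digit binary code, both sides should coincide at the value 1. The computation below is our own illustration of that equality case, using the forms (1.4) and (1.5) given earlier.

```python
# Uniform two-event source, unit utilities, Q = P, binary codeword lengths (1, 1).
D, alpha = 2, 2
u, p, q, l = [1.0, 1.0], [0.5, 0.5], [0.5, 0.5], [1, 1]
t = (1 - alpha) / alpha  # = -1/2
W = sum(ui * pi for ui, pi in zip(u, p))

# L_alpha(U), eq. (1.4)
L = (sum(ui * pi / W * D ** (li * t) for ui, pi, li in zip(u, p, l)) - 1) / (D ** t - 1)
# I_alpha(P;Q;U), eq. (1.5)
I = ((sum(ui * pi * qi ** (alpha - 1)
          for ui, pi, qi in zip(u, p, q)) / W) ** (1 / alpha) - 1) / (D ** t - 1)

print(L, I)  # both ~1.0, so (2.4) holds here with equality
```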
Theorem 2.2. For every code with lengths $\{l_i\}$, $i = 1, 2, ..., n$, of Theorem
2.1, $L_\alpha(U)$ can be made to satisfy the inequality

$$L_\alpha(U) < I_\alpha(P;Q;U)\, D^{\frac{1-\alpha}{\alpha}} + \frac{1 - D^{\frac{1-\alpha}{\alpha}}}{D^{\frac{1-\alpha}{\alpha}} - 1}. \qquad (2.8)$$
of length 1. In every $\delta_i$, there lies exactly one positive integer $l_i$ such that

$$0 < -\log_D \left[\frac{u_i q_i^{\alpha-1}}{\sum_{i=1}^{n} u_i p_i q_i^{\alpha-1}}\right] \le l_i < -\log_D \left[\frac{u_i q_i^{\alpha-1}}{\sum_{i=1}^{n} u_i p_i q_i^{\alpha-1}}\right] + 1. \qquad (2.11)$$
We will first show that the sequence $l_1, l_2, ..., l_n$ thus defined satisfies (2.3).
From the left-hand inequality in (2.11), we have

$$-\log_D \left[\frac{u_i q_i^{\alpha-1}}{\sum_{i=1}^{n} u_i p_i q_i^{\alpha-1}}\right] \le l_i,$$
or

$$\frac{u_i q_i^{\alpha-1}}{\sum_{i=1}^{n} u_i p_i q_i^{\alpha-1}} \ge D^{-l_i},$$

so that, multiplying both sides by $p_i$ and summing over $i = 1, 2, ..., n$, we obtain (2.3).
Also, from the right-hand inequality in (2.11),

$$D^{l_i\left(\frac{1-\alpha}{\alpha}\right)} < \left[\frac{u_i q_i^{\alpha-1}}{\sum_{i=1}^{n} u_i p_i q_i^{\alpha-1}}\right]^{\frac{\alpha-1}{\alpha}} D^{\frac{1-\alpha}{\alpha}}.$$
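The choice (2.11) can be simulated numerically: picking each $l_i$ as the unique positive integer in the prescribed interval yields lengths satisfying the ordinary Kraft inequality $\sum_i p_i D^{-l_i} \le 1$ whenever $\sum_i p_i \le 1$. A sketch under our reconstruction of (2.11); the clamping to a minimum length of 1 is our own guard for the case where the lower bound in (2.11) is non-positive:

```python
import math

def lengths_from_2_11(u, p, q, alpha, D=2):
    """Pick each l_i as the smallest positive integer with
    D**(-l_i) <= u_i * q_i**(alpha-1) / sum_j u_j p_j q_j**(alpha-1),
    following our reading of (2.11)."""
    S = sum(ui * pi * qi ** (alpha - 1) for ui, pi, qi in zip(u, p, q))
    return [max(1, math.ceil(-math.log(ui * qi ** (alpha - 1) / S, D)))
            for ui, qi in zip(u, q)]

u, p, q, alpha, D = [1.0, 1.0], [0.25, 0.75], [0.25, 0.75], 2, 2
l = lengths_from_2_11(u, p, q, alpha, D)
kraft = sum(pi * D ** (-li) for pi, li in zip(p, l))
print(l, kraft)  # lengths [2, 1], Kraft sum 0.4375 <= 1
```

The rarer event receives the longer codeword, as expected of a Shannon-type length assignment.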
References
[6] S. Guiasu, C.F. Picard, Borne inférieure de la longueur de certains codes,
C.R. Acad. Sci. Paris, 273 (1971), 248-251.
[7] D.S. Hooda, U.S. Bhaker, A generalized useful information measure and
coding theorem, Soochow Journal of Mathematics, 23, No. 1 (1997), 53-62.
[8] P. Jain, R.K. Tuteja, On coding theorem connected with useful entropy of
order $\alpha$, International Journal of Mathematics and Mathematical Sciences,
12, No. 1 (1989), 193-198.
[9] D.F. Kerridge, Inaccuracy and inference, Journal of the Royal Statistical
Society, Series B, 23 (1961), 184-194.
[15] H.C. Taneja, D.S. Hooda, R.K. Tuteja, Coding theorems on a generalized
useful information, Soochow Journal of Mathematics, 11 (1985), 123-131.