
IJITE, Vol. 01, Issue 07 (December 2013), ISSN: 2321-1776

ALGEBRAIC AND ANALYTIC PROPERTIES OF HR(P)


*Dr Meenakshi Darolia

In this paper we first give the definition of the R-norm information measure and then discuss the algebraic and analytic properties of the R-norm information measure H_R(P), which are summarized in the following two theorems.

First we introduce some notation for convenience. We shall often refer to the set of positive real numbers not equal to 1. We denote this set by R_+ with

R_+ = \{R : R > 0,\ R \neq 1\}.

We also define \Delta_n as the set of all n-ary probability distributions P = (p_1, p_2, \ldots, p_n) which satisfy the conditions

p_i \geq 0, \qquad \sum_{i=1}^{n} p_i = 1.

DEFINITION: The R-norm information of the distribution P \in \Delta_n is defined for R \in R_+ by

H_R(P) = \frac{R}{R-1}\left[1 - \left(\sum_{i=1}^{n} p_i^R\right)^{1/R}\right]. \qquad (2.1)
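As a numerical illustration (not part of the original paper), definition (2.1) translates directly into Python; the function name h_r and the input checks below are our own choices:

```python
import math

def h_r(p, R):
    """R-norm information H_R(P) = R/(R-1) * [1 - (sum p_i^R)^(1/R)]."""
    assert R > 0 and R != 1, "R must lie in R+ = {R : R > 0, R != 1}"
    assert all(pi >= 0 for pi in p) and abs(sum(p) - 1) < 1e-9
    return R / (R - 1) * (1 - sum(pi ** R for pi in p) ** (1.0 / R))

print(h_r([0.5, 0.5], 2.0))   # 2 - sqrt(2), about 0.5858
print(h_r([0.9, 0.1], 0.5))   # a value 0 < R < 1 is equally admissible
```

The first call evaluates the measure for two equally likely outcomes; the second shows that the definition applies on both sides of R = 1.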

*Assistant Professor, Isharjyot Degree College Pehova

International Journal in IT and Engineering


http://www.ijmr.net


The R-norm information measure (2.1) is a real function \Delta_n \to R^+ defined for n \geq 2, where R^+ is the set of positive real numbers. This measure is different from Shannon's entropy and from the entropies of Rényi, Havrda and Charvát, and Daróczy.
ALGEBRAIC AND ANALYTIC PROPERTIES OF H_R(P):
The algebraic and analytic properties of the R-norm information measure H_R(P) will be summarized in the following two theorems. First we consider the algebraic properties of the R-norm information measure.
Theorem 1: The R-norm information measure H_R(P) has the following algebraic properties:
1. H_R(P) = H_R(p_1, p_2, \ldots, p_n) is a symmetric function of (p_1, p_2, \ldots, p_n).
2. H_R(P) is expansible, i.e. H_R(p_1, p_2, \ldots, p_{n_0}, 0) = H_R(p_1, p_2, \ldots, p_{n_0}).
3. H_R(P) is decisive, i.e. H_R(1, 0) = H_R(0, 1) = 0.
4. H_R(P) is non-recursive.
5. H_R(P) is pseudo-additive: if P and Q are independent, then

H_R(P * Q) = H_R(P) + H_R(Q) - \frac{R-1}{R} H_R(P) H_R(Q),

i.e. H_R(P) is non-additive.


Proof:
(1) To prove that H_R(P) = H_R(p_1, p_2, \ldots, p_n) is a symmetric function of p_1, p_2, \ldots, p_n: by definition (2.1), we have

H_R(P) = \frac{R}{R-1}\left[1 - \left(\sum_{i=1}^{n} p_i^R\right)^{1/R}\right] = \frac{R}{R-1}\left[1 - \left(p_1^R + p_2^R + \cdots + p_n^R\right)^{1/R}\right].

Since \sum_{i=1}^{n} p_i^R is a symmetric expression in the p_i — the sum p_1^R + p_2^R + \cdots + p_n^R is unchanged under any permutation of p_1, p_2, \ldots, p_n — it follows that H_R(P) is a symmetric function. This completes the proof.


(2) To prove H_R(p_1, p_2, \ldots, p_{n_0}, 0) = H_R(p_1, p_2, \ldots, p_{n_0}), consider

H_R(p_1, p_2, \ldots, p_{n_0}, 0) = \frac{R}{R-1}\left[1 - \left(p_1^R + p_2^R + \cdots + p_{n_0}^R + 0^R\right)^{1/R}\right]

= \frac{R}{R-1}\left[1 - \left(\sum_{i=1}^{n_0} p_i^R\right)^{1/R}\right]

= H_R(p_1, p_2, \ldots, p_{n_0}).

(3) To prove H_R(P) is decisive, i.e. H_R(1, 0) = H_R(0, 1) = 0: by definition, if we consider only two events,

H_R(P) = \frac{R}{R-1}\left[1 - \left(p_1^R + p_2^R\right)^{1/R}\right]. \qquad (2.3)

If we take p_1 = 1, p_2 = 0, then (2.3) becomes

H_R(P) = \frac{R}{R-1}\left[1 - \left(1^R + 0^R\right)^{1/R}\right] = \frac{R}{R-1}\left[1 - 1\right] = 0,

so H_R(1, 0) = 0. Similarly we can prove that H_R(0, 1) = 0.


(4) To prove H_R(P) is non-recursive, we have to prove that

H_R(p_1 + p_2, p_3, \ldots, p_n) + (p_1 + p_2)\, H_R\!\left(\frac{p_1}{p_1+p_2}, \frac{p_2}{p_1+p_2}\right) \neq H_R(p_1, p_2, \ldots, p_n).

For this, first we consider

H_R\!\left(\frac{p_1}{p_1+p_2}, \frac{p_2}{p_1+p_2}\right) = \frac{R}{R-1}\left[1 - \left(\frac{p_1^R + p_2^R}{(p_1+p_2)^R}\right)^{1/R}\right]. \qquad (2.4)

Multiplying both sides of (2.4) by (p_1 + p_2), we get

(p_1 + p_2)\, H_R\!\left(\frac{p_1}{p_1+p_2}, \frac{p_2}{p_1+p_2}\right) = \frac{R}{R-1}\left[(p_1 + p_2) - \left(p_1^R + p_2^R\right)^{1/R}\right].

Also,

H_R(p_1 + p_2, p_3, \ldots, p_n) = \frac{R}{R-1}\left[1 - \left((p_1+p_2)^R + p_3^R + \cdots + p_n^R\right)^{1/R}\right].

By combining the two expressions, the resulting sum does not reduce to

\frac{R}{R-1}\left[1 - \left(p_1^R + p_2^R + \cdots + p_n^R\right)^{1/R}\right] = H_R(p_1, p_2, \ldots, p_n),

so

H_R(p_1 + p_2, p_3, \ldots, p_n) + (p_1 + p_2)\, H_R\!\left(\frac{p_1}{p_1+p_2}, \frac{p_2}{p_1+p_2}\right) \neq H_R(p_1, p_2, \ldots, p_n).

Thus H_R(p_1, p_2, p_3, \ldots, p_n) is non-recursive.


(5) To prove H_R(P * Q) = H_R(P) + H_R(Q) - \frac{R-1}{R} H_R(P) H_R(Q), i.e. H_R(P) is non-additive.

Proof: Let A_1, A_2, \ldots, A_n and B_1, B_2, \ldots, B_m be two sets of events associated with the probability distributions P \in \Delta_n and Q \in \Delta_m. We denote the probability of the joint occurrence of the events A_i (i = 1, 2, \ldots, n) and B_j (j = 1, 2, \ldots, m) by p(A_i \cap B_j). Then the R-norm information is given by

H_R(P * Q) = \frac{R}{R-1}\left[1 - \left(\sum_{i=1}^{n}\sum_{j=1}^{m} p^R(A_i \cap B_j)\right)^{1/R}\right].

Since the events considered here are stochastically independent, p(A_i \cap B_j) = p(A_i)\, p(B_j), and therefore we have

H_R(P * Q) = \frac{R}{R-1}\left[1 - \left(\sum_{i=1}^{n} p^R(A_i)\right)^{1/R} \left(\sum_{j=1}^{m} p^R(B_j)\right)^{1/R}\right]

= \frac{R}{R-1}\left[1 - \left(1 - \frac{R-1}{R} H_R(P)\right)\left(1 - \frac{R-1}{R} H_R(Q)\right)\right]

= H_R(P) + H_R(Q) - \frac{R-1}{R} H_R(P) H_R(Q).
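Property 5 can be checked numerically under the stated independence assumption; this sketch (ours, not from the paper) builds the joint distribution of independent P and Q and compares both sides of the pseudo-additivity identity:

```python
def h_r(p, R):
    # R-norm information of definition (2.1)
    return R / (R - 1) * (1 - sum(x ** R for x in p) ** (1.0 / R))

P, Q, R = [0.5, 0.3, 0.2], [0.6, 0.4], 3.0
joint = [pi * qj for pi in P for qj in Q]   # independent joint distribution P*Q
lhs = h_r(joint, R)
rhs = h_r(P, R) + h_r(Q, R) - (R - 1) / R * h_r(P, R) * h_r(Q, R)
print(abs(lhs - rhs))   # agrees to floating-point precision
```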

Theorem 2: Let H_R(P) = H_R(p_1, p_2, \ldots, p_n) be the R-norm information measure. Then for P \in \Delta_n and R \in R_+ we find:
(1) H_R(P) is non-negative.
(2) H_R(P) \geq H_R(1, 0, 0, \ldots, 0) = 0.
(3) H_R(P) \leq H_R\!\left(\frac{1}{n}, \frac{1}{n}, \ldots, \frac{1}{n}\right) = \frac{R}{R-1}\left[1 - n^{(1-R)/R}\right].
(4) H_R(P) is a monotonic function of P.
(5) H_R(P) is continuous at R \in R_+.
(6) H_R(P) is stable in p_i, i = 1, 2, \ldots, n.
(7) H_R(P) is small for small probabilities.
(8) H_R(P) is a concave function of the p_i.
(9) \lim_{R \to \infty} H_R(P) = 1 - \max_i p_i.

Proof:
(1) To prove that H_R(P) \geq 0, we consider the following two cases.

Case I: When R > 1, then p_i^R \leq p_i for all i, so

\sum_{i=1}^{n} p_i^R \leq \sum_{i=1}^{n} p_i = 1, \qquad \left(\sum_{i=1}^{n} p_i^R\right)^{1/R} \leq 1,

and hence

1 - \left(\sum_{i=1}^{n} p_i^R\right)^{1/R} \geq 0. \qquad (2.9)

We know that if R > 1 then \frac{R}{R-1} > 0. Multiplying both sides of (2.9) by \frac{R}{R-1}, we get

H_R(P) = \frac{R}{R-1}\left[1 - \left(\sum_{i=1}^{n} p_i^R\right)^{1/R}\right] \geq 0 \quad \text{for } R > 1.

Case II: When 0 < R < 1, then p_i^R \geq p_i for all i, so

\sum_{i=1}^{n} p_i^R \geq \sum_{i=1}^{n} p_i = 1, \qquad \left(\sum_{i=1}^{n} p_i^R\right)^{1/R} \geq 1,

and hence

1 - \left(\sum_{i=1}^{n} p_i^R\right)^{1/R} \leq 0. \qquad (2.10)

We know that \frac{R}{R-1} < 0 if 0 < R < 1. Multiplying both sides of (2.10) by \frac{R}{R-1}, we get

H_R(P) = \frac{R}{R-1}\left[1 - \left(\sum_{i=1}^{n} p_i^R\right)^{1/R}\right] \geq 0.

Hence we conclude that H_R(P) is non-negative for all R \in R_+.
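The two cases of the proof can be exercised together in a small illustrative sketch (ours, not from the paper), sampling R on both sides of 1 and a few distributions:

```python
def h_r(p, R):
    # R-norm information of definition (2.1)
    return R / (R - 1) * (1 - sum(x ** R for x in p) ** (1.0 / R))

# Case I (R > 1) and Case II (0 < R < 1) of the non-negativity proof
for R in (0.25, 0.5, 2.0, 5.0):
    for p in ([1.0, 0.0], [0.5, 0.5], [0.7, 0.2, 0.1]):
        assert h_r(p, R) >= 0
print("H_R(P) >= 0 for all sampled P and R")
```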


(2) To prove H_R(1, 0, 0, \ldots, 0) = 0, i.e. if one of the probabilities is equal to 1 and all others are equal to zero, then H_R(P) = 0: by definition, we have

H_R(P) = \frac{R}{R-1}\left[1 - \left(\sum_{i=1}^{n} p_i^R\right)^{1/R}\right].

If we take p_1 = 1 and p_2 = p_3 = \cdots = p_n = 0, then \sum_{i=1}^{n} p_i = 1 and \sum_{i=1}^{n} p_i^R = 1, so \left(\sum_{i=1}^{n} p_i^R\right)^{1/R} = 1 and

H_R(P) = \frac{R}{R-1}\left[1 - 1\right] = 0 \quad \text{when } p_1 = 1,\ p_2 = p_3 = \cdots = p_n = 0.

Hence H_R(1, 0, 0, \ldots, 0) = 0, and by property (1), H_R(P) \geq 0 = H_R(1, 0, 0, \ldots, 0).

(3) To prove H_R(P) \leq H_R\!\left(\frac{1}{n}, \ldots, \frac{1}{n}\right) = \frac{R}{R-1}\left[1 - n^{(1-R)/R}\right]:

By definition (2.1), if we take p_1 = p_2 = \cdots = p_n = \frac{1}{n}, then

H_R\!\left(\frac{1}{n}, \ldots, \frac{1}{n}\right) = \frac{R}{R-1}\left[1 - \left(n \cdot n^{-R}\right)^{1/R}\right] = \frac{R}{R-1}\left[1 - n^{(1-R)/R}\right].

Now we prove H_R(P) \leq H_R\!\left(\frac{1}{n}, \ldots, \frac{1}{n}\right). Extremizing \left(\sum_{i=1}^{n} p_i^R\right)^{1/R} subject to \sum_{i=1}^{n} p_i = 1 by Lagrange multipliers, we have

\left(\sum_{i=1}^{n} p_i^R\right)^{1/R} \geq n^{(1-R)/R} \quad \text{for } R > 1,

and

\left(\sum_{i=1}^{n} p_i^R\right)^{1/R} \leq n^{(1-R)/R} \quad \text{for } 0 < R < 1.

Equality holds iff p_i = 1/n for all i = 1, 2, \ldots, n.

Substituting these bounds into definition (2.1), and noting that \frac{R}{R-1} > 0 for R > 1 and \frac{R}{R-1} < 0 for 0 < R < 1, completes the proof of property (3). Here it is noted that the R-norm information measure is maximal if all probabilities are equal and minimal if one probability is equal to unity and all others are equal to zero.
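A quick numerical check of property (3) (an illustrative sketch, not from the paper): the closed form for the uniform distribution matches, and non-uniform distributions score lower:

```python
def h_r(p, R):
    # R-norm information of definition (2.1)
    return R / (R - 1) * (1 - sum(x ** R for x in p) ** (1.0 / R))

n, R = 4, 2.5
uniform = [1.0 / n] * n
closed_form = R / (R - 1) * (1 - n ** ((1 - R) / R))   # property (3)
assert abs(h_r(uniform, R) - closed_form) < 1e-12
for p in ([0.7, 0.1, 0.1, 0.1], [0.4, 0.3, 0.2, 0.1]):
    assert h_r(p, R) <= h_r(uniform, R)
print("maximum attained at the uniform distribution")
```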
(4) H_R(P) is monotonic iff H_R(p, 1-p) is non-decreasing on p \in \left[0, \frac{1}{2}\right].
By definition (2.1), we have

H_R(p, 1-p) = \frac{R}{R-1}\left[1 - \left((1-p)^R + p^R\right)^{1/R}\right]. \qquad (2.16)

Let us define the function G(p) by

G(p) = 1 - \left((1-p)^R + p^R\right)^{1/R}. \qquad (2.17)

Differentiating (2.17) with respect to p, we get

\frac{dG(p)}{dp} = -\frac{1}{R}\left((1-p)^R + p^R\right)^{(1/R)-1}\left[-R(1-p)^{R-1} + R\,p^{R-1}\right] = \left((1-p)^R + p^R\right)^{(1/R)-1}\left[(1-p)^{R-1} - p^{R-1}\right].

Now when R > 1 and p \in \left[0, \frac{1}{2}\right], we have 1-p \geq p, so (1-p)^{R-1} \geq p^{R-1} and

\frac{dG(p)}{dp} \geq 0 \quad \text{for } R > 1.

From (2.16), we note that

\frac{d}{dp} H_R(p, 1-p) = \frac{R}{R-1} \frac{dG(p)}{dp} \geq 0 \quad \text{for } R > 1,\ p \in \left[0, \tfrac{1}{2}\right].

Similarly, when 0 < R < 1 we have (1-p)^{R-1} \leq p^{R-1} on \left[0, \frac{1}{2}\right], so

\frac{dG(p)}{dp} \leq 0 \quad \text{for } 0 < R < 1 \text{ and } p \in \left[0, \tfrac{1}{2}\right];

and since \frac{R}{R-1} < 0 for 0 < R < 1, we again have

\frac{d}{dp} H_R(p, 1-p) = \frac{R}{R-1} \frac{dG(p)}{dp} \geq 0.

Thus H_R(p, 1-p) is a non-decreasing function on \left[0, \frac{1}{2}\right] and hence a monotonic function.
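The monotonicity argument can be verified on a grid (illustrative sketch; the helper h_r2 is our own name for the two-event measure (2.16)):

```python
def h_r2(p, R):
    # two-event R-norm information H_R(p, 1-p), equation (2.16)
    return R / (R - 1) * (1 - ((1 - p) ** R + p ** R) ** (1.0 / R))

for R in (0.5, 3.0):                          # one value on each side of R = 1
    grid = [i / 100 for i in range(0, 51)]    # p in [0, 1/2]
    vals = [h_r2(p, R) for p in grid]
    assert all(a <= b + 1e-12 for a, b in zip(vals, vals[1:]))
print("H_R(p, 1-p) is non-decreasing on [0, 1/2] for both cases")
```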


(5) We know that \left(\sum_{i=1}^{n} p_i^R\right)^{1/R} is continuous in R on (0, \infty). Hence

H_R(P) = \frac{R}{R-1}\left[1 - \left(\sum_{i=1}^{n} p_i^R\right)^{1/R}\right]

is also continuous at every R \in R_+.

(6) H_R(P) is stable in p_i, i = 1, 2, \ldots, n.
We know that H_R(P) is expansible, i.e.

H_R(p, 0) = H_R(p). \qquad (2.21)

By definition (2.1), appending a component q to the distribution p gives

H_R(p; q) = \frac{R}{R-1}\left[1 - \left(\sum_i p_i^R + q^R\right)^{1/R}\right],

so

\lim_{q \to 0^+} H_R(p; q) = \frac{R}{R-1}\left[1 - \left(\sum_i p_i^R\right)^{1/R}\right] = H_R(p, 0^+) = H_R(p, 0).

Together with (2.21), it follows that H_R(P) is stable in the p_i.


(7) From (2.16) it follows that

\lim_{q \to 0^+} H_R(1-q, q) = \lim_{q \to 0^+} \frac{R}{R-1}\left[1 - \left((1-q)^R + q^R\right)^{1/R}\right] = \frac{R}{R-1}\left[1 - 1\right] = 0.

This proves that H_R(P) is small for small probabilities.


(8) To prove H_R(P) concave, first we define a concave function.
DEFINITION: A function f over a set S is said to be concave if for all choices of x_1, x_2, \ldots, x_m \in S and for all scalars \lambda_1, \lambda_2, \ldots, \lambda_m such that \lambda_i \geq 0 and \sum_{i=1}^{m} \lambda_i = 1, the following holds:

\sum_{i=1}^{m} \lambda_i f(x_i) \leq f\!\left(\sum_{i=1}^{m} \lambda_i x_i\right). \qquad (2.23)

Here we consider a random variable x taking its values in the set S = \{x_1, \ldots, x_m\} and r probability distributions over S as follows:

P_k(x) = \{p_k(x_1), \ldots, p_k(x_m)\}: \quad p_k(x_i) \geq 0, \quad \sum_{i=1}^{m} p_k(x_i) = 1, \quad k = 1, 2, \ldots, r.

Let us define another probability distribution P_0(x) = \{p_0(x_1), \ldots, p_0(x_m)\} over S by

p_0(x_i) = \sum_{k=1}^{r} \lambda_k p_k(x_i),

where the \lambda_k are non-negative scalars satisfying \sum_{k=1}^{r} \lambda_k = 1. Then we set

D = \sum_{k=1}^{r} \lambda_k H_R(P_k) - H_R(P_0), \qquad R(>0) \neq 1.

H_R(P) will be concave if D \leq 0 for all R(>0) \neq 1. So we consider:


D = \sum_{k=1}^{r} \lambda_k H_R(P_k) - H_R(P_0)

= \frac{R}{R-1}\left[\left(\sum_{i=1}^{m} p_0^R(x_i)\right)^{1/R} - \sum_{k=1}^{r} \lambda_k \left(\sum_{i=1}^{m} p_k^R(x_i)\right)^{1/R}\right],

where the constant terms cancel because \sum_{k=1}^{r} \lambda_k = 1. Now using Minkowski's inequality,

\left(\sum_{i=1}^{m}\left(\sum_{k=1}^{r} \lambda_k p_k(x_i)\right)^R\right)^{1/R} \leq \sum_{k=1}^{r} \lambda_k \left(\sum_{i=1}^{m} p_k^R(x_i)\right)^{1/R} \quad \text{for } R > 1, \qquad (2.24)

with the reverse inequality for 0 < R < 1. \qquad (2.25)

Hence for R > 1 the bracketed term is \leq 0 while \frac{R}{R-1} > 0, and for 0 < R < 1 the bracketed term is \geq 0 while \frac{R}{R-1} < 0; in both cases D \leq 0. This proves that H_R(P) is a concave function of P.
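Concavity can be spot-checked with a two-distribution mixture (our own illustrative sketch; D is the quantity from the proof):

```python
def h_r(p, R):
    # R-norm information of definition (2.1)
    return R / (R - 1) * (1 - sum(x ** R for x in p) ** (1.0 / R))

P1, P2 = [0.8, 0.1, 0.1], [0.2, 0.5, 0.3]
lam = 0.4
P0 = [lam * a + (1 - lam) * b for a, b in zip(P1, P2)]   # mixture distribution
for R in (0.5, 4.0):                                     # both sides of R = 1
    D = lam * h_r(P1, R) + (1 - lam) * h_r(P2, R) - h_r(P0, R)
    assert D <= 1e-12   # concavity: D <= 0
print("D <= 0 in both cases")
```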

(9) For simplicity of notation we set \max_i p_i = p_k.

When R > 1, then p_i \leq p_k for 1 \leq i \leq n, so

p_k^R \leq \sum_{i=1}^{n} p_i^R \leq n\, p_k^R,

and hence

p_k \leq \left(\sum_{i=1}^{n} p_i^R\right)^{1/R} \leq n^{1/R}\, p_k. \qquad (2.29)

By taking the limit R \to \infty on each side of (2.29), and noting that n^{1/R} \to 1, we obtain

\lim_{R \to \infty} \left(\sum_{i=1}^{n} p_i^R\right)^{1/R} = p_k = \max_i p_i.

Also we know that

\lim_{R \to \infty} \frac{R}{R-1} = 1.

And finally

\lim_{R \to \infty} H_R(P) = \lim_{R \to \infty} \frac{R}{R-1}\left[1 - \left(\sum_{i=1}^{n} p_i^R\right)^{1/R}\right] = 1 - \max_i p_i,

i.e. \lim_{R \to \infty} H_R(P) = 1 - \max_i p_i. This completes the proof of Theorem 2.
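The limiting behaviour of property (9) is easy to observe numerically (illustrative sketch, ours):

```python
def h_r(p, R):
    # R-norm information of definition (2.1)
    return R / (R - 1) * (1 - sum(x ** R for x in p) ** (1.0 / R))

p = [0.5, 0.3, 0.2]
for R in (10.0, 50.0, 200.0):
    print(R, h_r(p, R))          # approaches 1 - max(p) = 0.5
assert abs(h_r(p, 200.0) - (1 - max(p))) < 0.01
```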


Property (9) is of particular interest since it provides us with a direct interpretation of the value of R which can be chosen. It shows that for increasing R, the probability with the largest value, say p_k, tends to dominate the R-norm information of the distribution P. Therefore the R-norm information for large values of R seems appropriate for those applications in which we are mainly interested in events with large probability. It is also of interest to relate the R-norm information to the information of order \alpha, the information of type \beta, and Shannon's information measure. As may be expected, this depends on the values of R, \alpha and \beta.

Theorem: Let H_\alpha(p) be the information of order \alpha (Rényi, 1961) and H^\beta(p) the information of type \beta (Havrda and Charvát, 1967; Daróczy, 1970), given by

H_\alpha(p) = \frac{1}{1-\alpha} \log \sum_{i=1}^{n} p_i^\alpha, \qquad \alpha > 0,\ \alpha \neq 1,

and

H^\beta(p) = \left(2^{1-\beta} - 1\right)^{-1}\left(\sum_{i=1}^{n} p_i^\beta - 1\right), \qquad \beta > 0,\ \beta \neq 1.

Then for \alpha = R it holds that

H_R(p) = \frac{R}{R-1}\left[1 - \exp\!\left(\frac{1-R}{R} H_\alpha(p)\right)\right],

and for \beta = R we have

H_R(p) = \frac{R}{R-1}\left[1 - \left(1 - \left(1 - 2^{1-R}\right) H^\beta(p)\right)^{1/R}\right].

To prove this theorem, let us consider the right-hand side of the first identity and put \alpha = R:

\frac{R}{R-1}\left[1 - \exp\!\left(\frac{1-R}{R} H_\alpha(p)\right)\right] = \frac{R}{R-1}\left[1 - \exp\!\left(\frac{1-R}{R} \cdot \frac{1}{1-R} \log \sum_{i=1}^{n} p_i^R\right)\right]

= \frac{R}{R-1}\left[1 - \exp\!\left(\frac{1}{R} \log \sum_{i=1}^{n} p_i^R\right)\right] = \frac{R}{R-1}\left[1 - \left(\sum_{i=1}^{n} p_i^R\right)^{1/R}\right] = H_R(p).

Similarly, putting \beta = R in the second identity,

\frac{R}{R-1}\left[1 - \left(1 - \left(1 - 2^{1-R}\right) H^\beta(p)\right)^{1/R}\right] = \frac{R}{R-1}\left[1 - \left(1 + \left(2^{1-R} - 1\right)\left(2^{1-R} - 1\right)^{-1}\left(\sum_{i=1}^{n} p_i^R - 1\right)\right)^{1/R}\right]

= \frac{R}{R-1}\left[1 - \left(\sum_{i=1}^{n} p_i^R\right)^{1/R}\right] = H_R(p).

Hence the theorem is proved.
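Both identities can be verified numerically (an illustrative sketch; the helper names renyi and havrda_charvat are ours):

```python
import math

def h_r(p, R):
    # R-norm information of definition (2.1)
    return R / (R - 1) * (1 - sum(x ** R for x in p) ** (1.0 / R))

def renyi(p, a):
    # information of order alpha
    return 1 / (1 - a) * math.log(sum(x ** a for x in p))

def havrda_charvat(p, b):
    # information of type beta
    return (sum(x ** b for x in p) - 1) / (2 ** (1 - b) - 1)

p, R = [0.5, 0.25, 0.25], 2.0
via_order = R / (R - 1) * (1 - math.exp((1 - R) / R * renyi(p, R)))
via_type = R / (R - 1) * (1 - (1 - (1 - 2 ** (1 - R)) * havrda_charvat(p, R)) ** (1 / R))
assert abs(via_order - h_r(p, R)) < 1e-12
assert abs(via_type - h_r(p, R)) < 1e-12
print("both identities hold at alpha = beta = R")
```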

Now we discuss the most interesting property of the R-norm information measure:

\lim_{R \to 1} H_R(p) = H_S(p),

where H_S(p) = -\sum_{i=1}^{n} p_i \log p_i denotes Shannon's information measure.

Since by definition

H_R(p) = \frac{R}{R-1}\left[1 - \left(\sum_{i=1}^{n} p_i^R\right)^{1/R}\right],

we have

\lim_{R \to 1} H_R(p) = \lim_{R \to 1} \frac{R\left[1 - \left(\sum_{i=1}^{n} p_i^R\right)^{1/R}\right]}{R-1} \qquad \left(\tfrac{0}{0}\ \text{form}\right).

If we take T(R) = \left(\sum_{i=1}^{n} p_i^R\right)^{1/R}, then taking log on both sides, \log T = \frac{1}{R} \log \sum_{i=1}^{n} p_i^R, and differentiating,

\frac{1}{T}\frac{dT}{dR} = -\frac{1}{R^2} \log \sum_{i=1}^{n} p_i^R + \frac{1}{R} \cdot \frac{\sum_{i=1}^{n} p_i^R \log p_i}{\sum_{i=1}^{n} p_i^R}.

At R = 1 we have \sum_{i=1}^{n} p_i = 1 and T(1) = 1, so \frac{dT}{dR}\Big|_{R=1} = \sum_{i=1}^{n} p_i \log p_i. Then by L'Hôpital's rule,

\lim_{R \to 1} H_R(p) = \lim_{R \to 1} \frac{1 - T(R) - R\,\dfrac{dT}{dR}}{1} = 1 - 1 - \sum_{i=1}^{n} p_i \log p_i = -\sum_{i=1}^{n} p_i \log p_i = H_S(p).

REFERENCES
[1] Aczél, J. and Daróczy, Z. (1975), On Measures of Information and their Characterizations, Academic Press, New York.
[2] Arimoto, S. (1971), Information-theoretical considerations on estimation problems, Inform. Contr. 19, 181-194.
[3] Beckenbach, E.F. and Bellman, R. (1971), Inequalities, Springer-Verlag, Berlin.
[4] Boekee, D.E. and Van der Lubbe, J.C.A. (1979), Some aspects of error bounds in feature selection, Pattern Recognition 11, 353-360.
[5] Campbell, L.L. (1965), A coding theorem and Rényi's entropy, Information and Control 8, 423-429.
[6] Daróczy, Z. (1970), Generalized information functions, Inform. Contr. 16, 36-51.
[7] Boekee, D.E. and Van der Lubbe, J.C.A. (1980), The R-norm information measure, Information and Control 45, 136-155.
[8] Györfi, L. and Nemetz, T. (1975), On the dissimilarity of probability measures, in Proceedings Colloq. on Information Theory, Keszthely, Hungary.
[9] Hardy, G.H., Littlewood, J.E. and Pólya, G. (1973), Inequalities, Cambridge Univ. Press, London/New York.
[10] Havrda, J. and Charvát, F. (1967), Quantification method of classification processes: concept of structural \alpha-entropy, Kybernetika 3, 30-35.
[11] Nath, P. (1975), On a coding theorem connected with Rényi's entropy, Information and Control 29, 234-242.
[12] Shisha, O., Inequalities, Academic Press, New York.
[13] Rényi, A. (1961), On measures of entropy and information, in Proceedings Fourth Berkeley Symp. Math. Statist. Probability, No. 1, pp. 547-561.
[14] Singh, R.P. (2008), Some coding theorems for weighted entropy of order \alpha, Journal of Pure and Applied Mathematics Science, New Delhi.
