IJSM
Vol. 4(1), pp. 082-098, June, 2017. www.premierpublishers.org. ISSN: 2375-0499
Research Article

A Generalization of Minimax Distribution
In this paper, we introduce a new family of distributions called the Exponentiated Minimax distribution (EMD). We study some statistical properties of this distribution, including the moments, moment generating function, characteristic function, median, mode, harmonic mean, entropy, reliability function, hazard function and reverse hazard function. The unknown parameter of this distribution is also estimated by the method of maximum likelihood.
Keywords: Minimax distribution, Exponentiated Minimax distribution, Structural properties and Maximum Likelihood
Estimation.
INTRODUCTION
The Minimax probability distribution was originally proposed by McDonald (1984). Jones (2007) explored the genesis of the Minimax distribution and drew some parallels between the Beta and Minimax distributions. The Minimax distribution, denoted by Minimax(α, β), is a family of continuous probability distributions defined on the interval (0, 1) with probability density function (pdf) given by

g(x; α, β) = αβ x^{α−1} (1 − x^α)^{β−1} ;  0 < x < 1, α, β > 0   (1.1)

The corresponding cumulative distribution function (cdf) is given by

G(x; α, β) = 1 − (1 − x^α)^β ;  0 < x < 1, α, β > 0   (1.2)

where α and β are two shape parameters. The Minimax density is unimodal, uniantimodal, increasing, decreasing or constant depending on the values of its parameters. The Minimax distribution is a special case of the generalized beta distribution.
Exponentiated distributions have been widely studied in statistics since 1995, and a number of authors have developed various classes of these distributions: Mudholkar et al. (1995) proposed the exponentiated Weibull distribution. Gupta et al. (1998) first proposed a generalization of the standard exponential distribution, called the exponentiated exponential (EE) distribution. Nadarajah and Kotz (2003) defined and studied the exponentiated Frechet distribution, and Nadarajah (2005) defined and studied the exponentiated Gumbel distribution. Barreto-Souza and Cribari-Neto (2009) developed the exponentiated exponential-Poisson distribution, whereas Silva et al. (2010) proposed the exponentiated exponential-geometric distribution. Lemonte and Cordeiro (2011) introduced the exponentiated generalized inverse Gaussian distribution. Cordeiro et al. (2013) proposed the beta exponentiated Weibull distribution, whereas Adepoju et al. (2014) defined and studied the exponentiated Nakagami distribution. Oguntunde (2015) introduced the exponentiated weighted exponential distribution. Recently, Fatima and Ahmad (2016) considered the characterization and Bayesian estimation of the Minimax distribution. Here, in the same way, we generalize the Minimax distribution.
The exponentiated family of distributions is derived by raising the cumulative distribution function (CDF) of an arbitrary parent distribution to the power of a positive shape parameter, say λ > 0. Its pdf is given by

f(x) = λ g(x) [G(x)]^{λ−1}   (2.1)

Applying (2.1) to the Minimax distribution (1.1)–(1.2) yields the pdf of the Exponentiated Minimax distribution:

f(x) = λαβ x^{α−1} (1 − x^α)^{β−1} [1 − (1 − x^α)^β]^{λ−1} ;  0 < x < 1, α, β, λ > 0   (2.3)
The graphs of the density function, plotted for various values of α, β and λ, are given in Figure 1.1.

Figure 1.1. Exponentiated Minimax densities for selected values of (α, β, λ): (a) α = 2, β = 3, λ = 2; (b) α = 0.2, β = 0.3, λ = 0.2; (c) α = 1, β = 1, λ = 1; (d) α = 0.2, β = 0.3, λ = 4; (e) α = 0.2, β = 3, λ = 4; (f) α = 2, β = 3, λ = 0.4; (g) α = 2, β = 0.3, λ = 0.4; (h) α = 2, β = 0.4, λ = 0.2; (i) α = 0.2, β = 3, λ = 0.4.

Figure 1.1 illustrates some of the possible shapes of the Exponentiated Minimax distribution for different values of the parameters α, β and λ: the density function can be unimodal, uniantimodal, increasing, decreasing or constant depending on the parameter values.
The corresponding cdf of the Exponentiated Minimax distribution is given by

F(x) = [1 − (1 − x^α)^β]^λ ;  0 < x < 1, α, β, λ > 0   (2.4)
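For readers who wish to experiment with the model, the pdf (2.3) and cdf (2.4) can be sketched in a few lines of Python. This is a minimal sketch, not part of the original paper; the function names `emd_pdf` and `emd_cdf` are our own, and `lam` stands for λ:

```python
def emd_pdf(x, alpha, beta, lam):
    """Exponentiated Minimax pdf, eq. (2.3), for 0 < x < 1."""
    g = alpha * beta * x**(alpha - 1) * (1 - x**alpha)**(beta - 1)  # Minimax pdf (1.1)
    G = 1 - (1 - x**alpha)**beta                                    # Minimax cdf (1.2)
    return lam * g * G**(lam - 1)

def emd_cdf(x, alpha, beta, lam):
    """Exponentiated Minimax cdf, eq. (2.4): Minimax cdf raised to the power lam."""
    return (1 - (1 - x**alpha)**beta)**lam

# Crude midpoint-rule check that the density integrates to one for one
# illustrative parameter choice (alpha=2, beta=3, lam=2, as in Figure 1.1(a)).
n = 200_000
total = sum(emd_pdf((i + 0.5) / n, 2.0, 3.0, 2.0) for i in range(n)) / n
```

The same two functions are reused in the numerical checks sketched later in the paper.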
Special Cases

Case 1: When λ = 1, the Exponentiated Minimax distribution (2.3) reduces to the Minimax distribution (MD) with probability density function

f(x) = αβ x^{α−1} (1 − x^α)^{β−1} ;  0 < x < 1, α, β > 0   (3.1)

Case 2: When α = β = λ = 1, the Exponentiated Minimax distribution (2.3) reduces to the Uniform distribution (UD) with probability density function

f(x) = 1 ;  0 < x < 1   (3.2)

Case 3: When β = λ = 1, the Exponentiated Minimax distribution (2.3) reduces to the Power distribution (PD) with probability density function

f(x; α) = α x^{α−1} ;  0 < x < 1, α > 0   (3.3)

Case 4: When α = λ = 1, the Exponentiated Minimax distribution (2.3) reduces to the one-parameter Minimax distribution with probability density function

f(x; β) = β (1 − x)^{β−1} ;  0 < x < 1, β > 0   (3.4)
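The reductions above are easy to sanity-check numerically. The sketch below (our own helper names, with `lam` standing for λ) compares the EMD pdf against each stated sub-model on a grid of points:

```python
def emd_pdf(x, alpha, beta, lam):
    """Exponentiated Minimax pdf, eq. (2.3)."""
    return (lam * alpha * beta * x**(alpha - 1) * (1 - x**alpha)**(beta - 1)
            * (1 - (1 - x**alpha)**beta)**(lam - 1))

xs = [k / 10 for k in range(1, 10)]

# Case 1: lam = 1 gives the Minimax pdf (3.1)
case1 = all(abs(emd_pdf(x, 2.0, 3.0, 1.0)
                - 2.0 * 3.0 * x * (1 - x**2)**2) < 1e-12 for x in xs)

# Case 2: alpha = beta = lam = 1 gives the Uniform pdf (3.2)
case2 = all(abs(emd_pdf(x, 1.0, 1.0, 1.0) - 1.0) < 1e-12 for x in xs)

# Case 3: beta = lam = 1 gives the Power pdf (3.3)
case3 = all(abs(emd_pdf(x, 2.5, 1.0, 1.0) - 2.5 * x**1.5) < 1e-12 for x in xs)

# Case 4: alpha = lam = 1 gives the one-parameter Minimax pdf (3.4)
case4 = all(abs(emd_pdf(x, 1.0, 3.0, 1.0) - 3.0 * (1 - x)**2) < 1e-12 for x in xs)
```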
Reliability Analysis

(i) Reliability function R(x)
The reliability (survival) function is R(x) = 1 − F(x). For the Exponentiated Minimax distribution,

R(x) = 1 − [1 − (1 − x^α)^β]^λ ;  0 < x < 1   (4.1)

The corresponding plot of the reliability function at various values of α, β and λ is shown in Figure 2.1.
(ii) Hazard function H(x)
The hazard (failure rate) function is H(x) = f(x)/R(x). For the Exponentiated Minimax distribution,

H(x) = λαβ x^{α−1} (1 − x^α)^{β−1} [1 − (1 − x^α)^β]^{λ−1} / {1 − [1 − (1 − x^α)^β]^λ} ;  0 < x < 1   (4.2)
(iii) Reverse hazard function φ(x)
The reverse hazard function can be interpreted as the approximate probability of failure in [x, x + dx], given that the failure had occurred in [0, x]. It is defined as φ(x) = f(x)/F(x). For the Exponentiated Minimax distribution,

φ(x) = λαβ x^{α−1} (1 − x^α)^{β−1} [1 − (1 − x^α)^β]^{λ−1} / [1 − (1 − x^α)^β]^λ
     = λαβ x^{α−1} (1 − x^α)^{β−1} / [1 − (1 − x^α)^β]   (4.3)
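A quick numerical illustration of (4.1)–(4.3): R, H and φ can be computed directly from the pdf and cdf, and the identities H(x)·R(x) = f(x) and φ(x)·F(x) = f(x) provide a consistency check. A sketch with our own function names (`lam` stands for λ):

```python
def emd_pdf(x, alpha, beta, lam):
    return (lam * alpha * beta * x**(alpha - 1) * (1 - x**alpha)**(beta - 1)
            * (1 - (1 - x**alpha)**beta)**(lam - 1))

def emd_cdf(x, alpha, beta, lam):
    return (1 - (1 - x**alpha)**beta)**lam

def reliability(x, alpha, beta, lam):          # eq. (4.1): 1 - F
    return 1 - emd_cdf(x, alpha, beta, lam)

def hazard(x, alpha, beta, lam):               # eq. (4.2): f / R
    return emd_pdf(x, alpha, beta, lam) / reliability(x, alpha, beta, lam)

def reverse_hazard(x, alpha, beta, lam):       # eq. (4.3): f / F
    return emd_pdf(x, alpha, beta, lam) / emd_cdf(x, alpha, beta, lam)

a, b, lam = 2.0, 3.0, 2.0
x0 = 0.4
f0 = emd_pdf(x0, a, b, lam)
ok1 = abs(hazard(x0, a, b, lam) * reliability(x0, a, b, lam) - f0) < 1e-12
ok2 = abs(reverse_hazard(x0, a, b, lam) * emd_cdf(x0, a, b, lam) - f0) < 1e-12
```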
Theorem 5.1. Let X = (X₁, X₂, …, Xₙ) be a random sample of size n from the Exponentiated Minimax distribution with probability density function

f(x) = λαβ x^{α−1} (1 − x^α)^{β−1} [1 − (1 − x^α)^β]^{λ−1} ;  0 < x < 1, α, β, λ > 0

Then the rth moment is given by

E(X^r) = λ Σ_{j=0}^{∞} [(−1)^j Γ(r/α + 1) / (Γ(r/α + 1 − j) j!)] B(λ, j/β + 1) ;  r = 1, 2, …
Proof: Since we know that the rth moment of a random variable X is given by

E(X^r) = ∫₀¹ x^r f(x) dx   (5.1)

using eq. (2.3) in eq. (5.1), we have

E(X^r) = λαβ ∫₀¹ x^{r+α−1} (1 − x^α)^{β−1} [1 − (1 − x^α)^β]^{λ−1} dx

Put 1 − (1 − x^α)^β = t; as x → 0, t → 0 and as x → 1, t → 1, with αβ x^{α−1} (1 − x^α)^{β−1} dx = dt and x = [1 − (1 − t)^{1/β}]^{1/α}. Then

E(X^r) = λ ∫₀¹ [1 − (1 − t)^{1/β}]^{r/α} t^{λ−1} dt   (5.2)
We apply the series expansion

(1 − z)^{b−1} = Σ_{j=0}^{∞} [(−1)^j Γ(b) / (Γ(b − j) j!)] z^j

so that

[1 − (1 − t)^{1/β}]^{r/α} = Σ_{j=0}^{∞} [(−1)^j Γ(r/α + 1) / (Γ(r/α + 1 − j) j!)] (1 − t)^{j/β}   (5.3)

By using (5.3) in (5.2), we get

E(X^r) = λ Σ_{j=0}^{∞} [(−1)^j Γ(r/α + 1) / (Γ(r/α + 1 − j) j!)] ∫₀¹ t^{λ−1} (1 − t)^{j/β} dt

On solving the above equation, we get

E(X^r) = λ Σ_{j=0}^{∞} [(−1)^j Γ(r/α + 1) / (Γ(r/α + 1 − j) j!)] B(λ, j/β + 1)   (5.4)

In particular, the mean is

μ′₁ = λ Σ_{j=0}^{∞} [(−1)^j Γ(1/α + 1) / (Γ(1/α + 1 − j) j!)] B(λ, j/β + 1)   (5.5)
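The series (5.4) is straightforward to evaluate numerically, and it can be checked against direct numerical integration of x^r f(x). In the sketch below (our own code, with `lam` standing for λ), the binomial coefficients Γ(r/α+1)/(Γ(r/α+1−j) j!) are built by the recurrence binom(c, j+1) = binom(c, j)(c−j)/(j+1), and the beta function is evaluated through log-gammas:

```python
import math

def emd_pdf(x, alpha, beta, lam):
    return (lam * alpha * beta * x**(alpha - 1) * (1 - x**alpha)**(beta - 1)
            * (1 - (1 - x**alpha)**beta)**(lam - 1))

def moment_series(r, alpha, beta, lam, terms=200):
    """E[X^r] via the series (5.4)."""
    c = r / alpha
    s, t = 0.0, 1.0          # t accumulates (-1)^j * binom(c, j)
    for j in range(terms):
        logB = (math.lgamma(lam) + math.lgamma(j / beta + 1)
                - math.lgamma(lam + j / beta + 1))     # log B(lam, j/beta + 1)
        s += t * math.exp(logB)
        t *= (j - c) / (j + 1)
    return lam * s

# Direct midpoint-rule evaluation of E[X] for alpha=2, beta=3, lam=2
n = 100_000
mean_num = sum(((i + 0.5) / n) * emd_pdf((i + 0.5) / n, 2.0, 3.0, 2.0)
               for i in range(n)) / n
```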
The second and third raw moments follow from (5.4) with r = 2 and r = 3:

μ′₂ = λ Σ_{j=0}^{∞} [(−1)^j Γ(2/α + 1) / (Γ(2/α + 1 − j) j!)] B(λ, j/β + 1)   (5.6)

and the variance is

μ₂ = μ′₂ − (μ′₁)²   (5.7)

μ′₃ = λ Σ_{j=0}^{∞} [(−1)^j Γ(3/α + 1) / (Γ(3/α + 1 − j) j!)] B(λ, j/β + 1)   (5.8)

The third central moment is

μ₃ = μ′₃ − 3 μ′₂ μ′₁ + 2 (μ′₁)³   (5.9)

After substituting the values of eq. (5.5), eq. (5.6) and eq. (5.8) in eq. (5.9), and writing C_j(r) = (−1)^j Γ(r/α + 1)/(Γ(r/α + 1 − j) j!) and B_j = B(λ, j/β + 1) for brevity, we have

μ₃ = λ Σ_{j=0}^{∞} C_j(3) B_j − 3 λ² [Σ_{j=0}^{∞} C_j(2) B_j] [Σ_{j=0}^{∞} C_j(1) B_j] + 2 λ³ [Σ_{j=0}^{∞} C_j(1) B_j]³   (5.10)

The fourth raw moment is

μ′₄ = λ Σ_{j=0}^{∞} [(−1)^j Γ(4/α + 1) / (Γ(4/α + 1 − j) j!)] B(λ, j/β + 1)   (5.11)

Thus,

μ₄ = μ′₄ − 4 μ′₃ μ′₁ + 6 μ′₂ (μ′₁)² − 3 (μ′₁)⁴   (5.12)

After substituting the values of eq. (5.5), eq. (5.6), eq. (5.8) and eq. (5.11) in eq. (5.12), we have

μ₄ = λ Σ_{j=0}^{∞} C_j(4) B_j − 4 λ² [Σ_{j=0}^{∞} C_j(3) B_j] [Σ_{j=0}^{∞} C_j(1) B_j] + 6 λ³ [Σ_{j=0}^{∞} C_j(2) B_j] [Σ_{j=0}^{∞} C_j(1) B_j]² − 3 λ⁴ [Σ_{j=0}^{∞} C_j(1) B_j]⁴   (5.13)
Coefficient of Variation
The coefficient of variation is the ratio of the standard deviation to the mean. It is usually denoted by C.V. and is given by

C.V. = σ / μ′₁   (5.14)

By using the values of eq. (5.5) and eq. (5.7) in eq. (5.14), we get

C.V. = √(μ′₂ − (μ′₁)²) / μ′₁   (5.15)

where μ′₁ and μ′₂ are given by the series (5.4) with r = 1 and r = 2.
(i) Skewness: The most popular way to measure the skewness and kurtosis of a distribution function rests upon ratios of moments. Lack of symmetry of the tails (about the mean) of a frequency distribution curve is known as skewness. The measure of skewness given by Karl Pearson in terms of the moments of a frequency distribution is

β₁ = μ₃² / μ₂³   (5.16)

After using eq. (5.7) and eq. (5.10) in eq. (5.16), we have

√β₁ = μ₃ / μ₂^{3/2} = [μ′₃ − 3 μ′₂ μ′₁ + 2 (μ′₁)³] / [μ′₂ − (μ′₁)²]^{3/2}   (5.17)

with each μ′_r given by the series (5.4).
(ii) Kurtosis: Kurtosis is the degree of peakedness of a distribution, defined as a normalized form of the fourth central moment μ₄ of a distribution. There are several flavors of kurtosis commonly encountered, including the kurtosis proper, denoted β₂ and defined by

β₂ = μ₄ / μ₂²   (5.18)

After using eq. (5.7) and eq. (5.13) in eq. (5.18), we have

β₂ = [μ′₄ − 4 μ′₃ μ′₁ + 6 μ′₂ (μ′₁)² − 3 (μ′₁)⁴] / [μ′₂ − (μ′₁)²]²   (5.19)

with each μ′_r given by the series (5.4). The excess kurtosis is γ₂ = β₂ − 3; using equation (5.19),

γ₂ = [μ′₄ − 4 μ′₃ μ′₁ + 6 μ′₂ (μ′₁)² − 3 (μ′₁)⁴] / [μ′₂ − (μ′₁)²]² − 3   (5.20)
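Equations (5.14)–(5.20) are easy to evaluate once the raw moments are available. The sketch below (our own code; quadrature stands in for the series (5.4), and `lam` is λ) computes C.V., β₁ and β₂ for one parameter choice; Pearson's inequality β₂ ≥ β₁ + 1, which holds for every distribution, provides a sanity check:

```python
import math

def emd_pdf(x, alpha, beta, lam):
    return (lam * alpha * beta * x**(alpha - 1) * (1 - x**alpha)**(beta - 1)
            * (1 - (1 - x**alpha)**beta)**(lam - 1))

def raw_moment(r, alpha, beta, lam, n=100_000):
    """Midpoint-rule evaluation of E[X^r]; a numerical stand-in for (5.4)."""
    return sum(((i + 0.5) / n)**r * emd_pdf((i + 0.5) / n, alpha, beta, lam)
               for i in range(n)) / n

a, b, lam = 2.0, 3.0, 2.0
m1, m2, m3, m4 = (raw_moment(r, a, b, lam) for r in (1, 2, 3, 4))

var = m2 - m1**2                                      # eq. (5.7)
cv = math.sqrt(var) / m1                              # eqs. (5.14)-(5.15)
mu3 = m3 - 3 * m2 * m1 + 2 * m1**3                    # eqs. (5.9)-(5.10)
mu4 = m4 - 4 * m3 * m1 + 6 * m2 * m1**2 - 3 * m1**4   # eqs. (5.12)-(5.13)
beta1 = mu3**2 / var**3                               # skewness, eq. (5.16)
beta2 = mu4 / var**2                                  # kurtosis, eq. (5.18)
```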
Harmonic Mean
The harmonic mean H is defined through

1/H = E(1/X) = λαβ ∫₀¹ x^{α−2} (1 − x^α)^{β−1} [1 − (1 − x^α)^β]^{λ−1} dx

Put 1 − (1 − x^α)^β = t; as x → 0, t → 0 and as x → 1, t → 1, with αβ x^{α−1} (1 − x^α)^{β−1} dx = dt and x = [1 − (1 − t)^{1/β}]^{1/α}. Then

1/H = λ ∫₀¹ [1 − (1 − t)^{1/β}]^{−1/α} t^{λ−1} dt

Expanding [1 − (1 − t)^{1/β}]^{−1/α} as in (5.3) and integrating term by term gives

1/H = λ Σ_{j=0}^{∞} [(−1)^j Γ(1 − 1/α) / (Γ(1 − 1/α − j) j!)] B(λ, j/β + 1)   (6.1)
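The series (6.1) is the r = −1 case of (5.4), so it can be checked against a direct numerical evaluation of E(1/X). A sketch (our own code, `lam` for λ), reusing the binomial recurrence from the moment computation:

```python
import math

def emd_pdf(x, alpha, beta, lam):
    return (lam * alpha * beta * x**(alpha - 1) * (1 - x**alpha)**(beta - 1)
            * (1 - (1 - x**alpha)**beta)**(lam - 1))

def inv_moment_series(alpha, beta, lam, terms=2000):
    """1/H = E[1/X] via eq. (6.1), i.e. (5.4) with r = -1."""
    c = -1.0 / alpha
    s, t = 0.0, 1.0          # t accumulates (-1)^j * binom(c, j)
    for j in range(terms):
        logB = (math.lgamma(lam) + math.lgamma(j / beta + 1)
                - math.lgamma(lam + j / beta + 1))     # log B(lam, j/beta + 1)
        s += t * math.exp(logB)
        t *= (j - c) / (j + 1)
    return lam * s

a, b, lam = 2.0, 3.0, 2.0
n = 200_000
inv_mean_num = sum(emd_pdf((i + 0.5) / n, a, b, lam) / ((i + 0.5) / n)
                   for i in range(n)) / n
H = 1.0 / inv_moment_series(a, b, lam)     # harmonic mean
```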
Median
The median m satisfies

F(m) = P(X ≤ m) = ∫₀^m f(x) dx = 1/2   (7.1)

i.e.

λαβ ∫₀^m x^{α−1} (1 − x^α)^{β−1} [1 − (1 − x^α)^β]^{λ−1} dx = 1/2

Put 1 − (1 − x^α)^β = y; as x → 0, y → 0 and as x → m, y → 1 − (1 − m^α)^β, with αβ x^{α−1} (1 − x^α)^{β−1} dx = dy. Then

λ ∫₀^{1−(1−m^α)^β} y^{λ−1} dy = 1/2

[1 − (1 − m^α)^β]^λ = 1/2

Taking logarithms, λ log[1 − (1 − m^α)^β] = −log 2, so

1 − (1 − m^α)^β = e^{−(log 2)/λ} = 2^{−1/λ}

(1 − m^α)^β = 1 − 2^{−1/λ}

1 − m^α = (1 − 2^{−1/λ})^{1/β}

m = [1 − (1 − 2^{−1/λ})^{1/β}]^{1/α}   (7.2)
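The closed form (7.2) can be verified by numerically inverting the cdf (2.4) with bisection (a sketch with our own function names; `lam` is λ):

```python
def emd_cdf(x, alpha, beta, lam):
    return (1 - (1 - x**alpha)**beta)**lam

def median_closed(alpha, beta, lam):
    """Median of the EMD, eq. (7.2)."""
    return (1 - (1 - 2.0**(-1.0 / lam))**(1.0 / beta))**(1.0 / alpha)

# Bisection on F(m) = 1/2 for alpha=2, beta=3, lam=2
a, b, lam = 2.0, 3.0, 2.0
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = (lo + hi) / 2
    if emd_cdf(mid, a, b, lam) < 0.5:
        lo = mid
    else:
        hi = mid
median_num = (lo + hi) / 2
```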
Mode
An approximate mode is obtained from the empirical relation Mode ≈ 3 Median − 2 Mean   (7.3). Substituting the values of eq. (7.2) and eq. (5.5) in eq. (7.3), we have

Mode ≈ 3 [1 − (1 − 2^{−1/λ})^{1/β}]^{1/α} − 2 λ Σ_{j=0}^{∞} [(−1)^j Γ(1/α + 1) / (Γ(1/α + 1 − j) j!)] B(λ, j/β + 1)
Moment Generating Function
Theorem 8.1. Let X have an Exponentiated Minimax distribution. Then the moment generating function of X, denoted by M_X(t), is given by

M_X(t) = Σ_{r=0}^{∞} Σ_{j=0}^{∞} (t^r / r!) λ [(−1)^j Γ(r/α + 1) / (Γ(r/α + 1 − j) j!)] B(λ, j/β + 1)   (8.1)

Proof: By definition, M_X(t) = E(e^{tX}) = Σ_{r=0}^{∞} (t^r / r!) E(X^r); substituting the series (5.4) for E(X^r) gives the result. This completes the proof.
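Because X is supported on (0, 1), the series M_X(t) = Σ_r (t^r / r!) E(X^r) converges quickly, so it can be checked by truncating in r and computing each E(X^r) numerically (a sketch, our own names; `lam` is λ):

```python
import math

def emd_pdf(x, alpha, beta, lam):
    return (lam * alpha * beta * x**(alpha - 1) * (1 - x**alpha)**(beta - 1)
            * (1 - (1 - x**alpha)**beta)**(lam - 1))

a, b, lam, t = 2.0, 3.0, 2.0, 1.0
n = 20_000
xs = [(i + 0.5) / n for i in range(n)]
ws = [emd_pdf(x, a, b, lam) / n for x in xs]      # quadrature weights f(x) dx

# Raw moments E[X^r] for r = 0..29 (numerical stand-in for the series (5.4))
moments = [sum(w * x**r for x, w in zip(xs, ws)) for r in range(30)]

mgf_series = sum(t**r / math.factorial(r) * moments[r] for r in range(30))
mgf_direct = sum(w * math.exp(t * x) for x, w in zip(xs, ws))
```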
Theorem 8.2. Let X have an Exponentiated Minimax distribution. Then the characteristic function of X, denoted by φ_X(t), is given by

φ_X(t) = E(e^{itX}) = Σ_{r=0}^{∞} Σ_{j=0}^{∞} ((it)^r / r!) λ [(−1)^j Γ(r/α + 1) / (Γ(r/α + 1 − j) j!)] B(λ, j/β + 1)   (8.2)

Proof: By definition,

φ_X(t) = E(e^{itX}) = ∫₀¹ e^{itx} f(x) dx

Using the Taylor series expansion of e^{itx},

φ_X(t) = ∫₀¹ [1 + itx + (itx)²/2! + …] f(x) dx
       = Σ_{r=0}^{∞} ((it)^r / r!) ∫₀¹ x^r f(x) dx
       = Σ_{r=0}^{∞} ((it)^r / r!) E(X^r)

Substituting the series (5.4) for E(X^r) gives

φ_X(t) = Σ_{r=0}^{∞} Σ_{j=0}^{∞} ((it)^r / r!) λ [(−1)^j Γ(r/α + 1) / (Γ(r/α + 1 − j) j!)] B(λ, j/β + 1)

This completes the proof.
Entropy
The Shannon entropy of a random variable X is a measure of uncertainty and is given by E[−log f(x)], where f(x) is the probability density function of the random variable X. The Shannon entropy of the Exponentiated Minimax distribution is obtained as:

H(X) = E[−log f(x)] = −E[log{λαβ x^{α−1} (1 − x^α)^{β−1} [1 − (1 − x^α)^β]^{λ−1}}]
     = −log(λαβ) − (α − 1) E[log x] − (β − 1) E[log(1 − x^α)] − (λ − 1) E[log{1 − (1 − x^α)^β}]

H(X) = −log(λαβ) − (α − 1) I₁ − (β − 1) I₂ − (λ − 1) I₃   (9.1)

Now,

I₁ = E[log x] = ∫₀¹ log x · λαβ x^{α−1} (1 − x^α)^{β−1} [1 − (1 − x^α)^β]^{λ−1} dx   (9.2)

Put x^α = y; as x → 0, y → 0 and as x → 1, y → 1, with α x^{α−1} dx = dy and log x = (1/α) log y. Then

I₁ = (λβ/α) ∫₀¹ log y (1 − y)^{β−1} [1 − (1 − y)^β]^{λ−1} dy   (9.3)

We apply the series expansion

[1 − (1 − y)^β]^{λ−1} = Σ_{j=0}^{∞} [(−1)^j Γ(λ) / (Γ(λ − j) j!)] (1 − y)^{βj}   (9.4)

so that

I₁ = (λβ/α) Σ_{j=0}^{∞} [(−1)^j Γ(λ) / (Γ(λ − j) j!)] ∫₀¹ y^{1−1} (1 − y)^{βj+β−1} log y dy   (9.5)

We know that

∫₀¹ y^{a−1} (1 − y)^{b−1} log y dy = B(a, b) [ψ(a) − ψ(a + b)]

Hence

I₁ = (λβ/α) Σ_{j=0}^{∞} [(−1)^j Γ(λ) / (Γ(λ − j) j!)] B(1, βj + β) [ψ(1) − ψ(βj + β + 1)]   (9.6)
I₂ = E[log(1 − x^α)] = ∫₀¹ log(1 − x^α) · λαβ x^{α−1} (1 − x^α)^{β−1} [1 − (1 − x^α)^β]^{λ−1} dx   (9.7)

Put x^α = y; as x → 0, y → 0 and as x → 1, y → 1, with α x^{α−1} dx = dy. Then

I₂ = λβ ∫₀¹ log(1 − y) (1 − y)^{β−1} [1 − (1 − y)^β]^{λ−1} dy   (9.8)

We apply the series expansion

[1 − (1 − y)^β]^{λ−1} = Σ_{j=0}^{∞} [(−1)^j Γ(λ) / (Γ(λ − j) j!)] (1 − y)^{βj}   (9.9)

so that

I₂ = λβ Σ_{j=0}^{∞} [(−1)^j Γ(λ) / (Γ(λ − j) j!)] ∫₀¹ y^{1−1} (1 − y)^{βj+β−1} log(1 − y) dy   (9.10)

We know that

∫₀¹ y^{a−1} (1 − y)^{b−1} log(1 − y) dy = B(a, b) [ψ(b) − ψ(a + b)]

On solving equation (9.10), we get

I₂ = λβ Σ_{j=0}^{∞} [(−1)^j Γ(λ) / (Γ(λ − j) j!)] B(1, βj + β) [ψ(βj + β) − ψ(βj + β + 1)]   (9.11)
I₃ = E[log{1 − (1 − x^α)^β}] = ∫₀¹ log{1 − (1 − x^α)^β} · λαβ x^{α−1} (1 − x^α)^{β−1} [1 − (1 − x^α)^β]^{λ−1} dx   (9.12)

Put 1 − (1 − x^α)^β = y; as x → 0, y → 0 and as x → 1, y → 1, with αβ (1 − x^α)^{β−1} x^{α−1} dx = dy. Then

I₃ = λ ∫₀¹ log y · y^{λ−1} (1 − y)^{1−1} dy   (9.13)

On solving the above equation (9.13), we get

I₃ = λ B(λ, 1) [ψ(λ) − ψ(λ + 1)]
I₃ = ψ(λ) − ψ(λ + 1)   (9.14)

since λ B(λ, 1) = 1. Substituting (9.6), (9.11) and (9.14) in (9.1), we have

H(X) = −λβ Σ_{j=0}^{∞} [(−1)^j Γ(λ) / (Γ(λ − j) j!)] B(1, βj + β) { ((α − 1)/α) [ψ(1) − ψ(βj + β + 1)] + (β − 1) [ψ(βj + β) − ψ(βj + β + 1)] }
       − (λ − 1) [ψ(λ) − ψ(λ + 1)] − log(λαβ)   (9.15)
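Two of the pieces of (9.1) admit quick numerical checks: I₃ = ψ(λ) − ψ(λ + 1) = −1/λ (by the digamma recurrence ψ(λ + 1) = ψ(λ) + 1/λ), and the decomposition (9.1) must agree with the direct definition H(X) = −E[log f(X)]. A sketch (our own names, `lam` for λ):

```python
import math

def emd_pdf(x, alpha, beta, lam):
    return (lam * alpha * beta * x**(alpha - 1) * (1 - x**alpha)**(beta - 1)
            * (1 - (1 - x**alpha)**beta)**(lam - 1))

a, b, lam = 2.0, 3.0, 2.0
n = 200_000
xs = [(i + 0.5) / n for i in range(n)]
ws = [emd_pdf(x, a, b, lam) / n for x in xs]      # quadrature weights f(x) dx

# Direct entropy: H = -E[log f(X)]
H_direct = -sum(w * math.log(emd_pdf(x, a, b, lam)) for x, w in zip(xs, ws))

# Decomposition (9.1): H = -log(lam*a*b) - (a-1)I1 - (b-1)I2 - (lam-1)I3
I1 = sum(w * math.log(x) for x, w in zip(xs, ws))
I2 = sum(w * math.log(1 - x**a) for x, w in zip(xs, ws))
I3 = sum(w * math.log(1 - (1 - x**a)**b) for x, w in zip(xs, ws))
H_decomp = -math.log(lam * a * b) - (a - 1) * I1 - (b - 1) * I2 - (lam - 1) * I3
```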
Order Statistics
Let X₍₁₎ denote the smallest of {X₁, X₂, …, Xₙ}, X₍₂₎ denote the second smallest, and, similarly, X₍ₖ₎ denote the kth smallest. Then the random variables X₍₁₎, X₍₂₎, …, X₍ₙ₎, called the order statistics of the sample, have the kth order statistic with pdf

f_{X(k)}(x) = [n! / ((k − 1)! (n − k)!)] f(x) [F(x)]^{k−1} [1 − F(x)]^{n−k}   (10.1)

for k = 1, 2, …, n. For the Exponentiated Minimax distribution, the pdf of the kth order statistic is

f_{X(k)}(x) = [n! / ((k − 1)! (n − k)!)] λαβ x^{α−1} (1 − x^α)^{β−1} [1 − (1 − x^α)^β]^{λ−1} {[1 − (1 − x^α)^β]^λ}^{k−1} {1 − [1 − (1 − x^α)^β]^λ}^{n−k}

f_{X(k)}(x) = [n! / ((k − 1)! (n − k)!)] λαβ x^{α−1} (1 − x^α)^{β−1} [1 − (1 − x^α)^β]^{λk−1} {1 − [1 − (1 − x^α)^β]^λ}^{n−k}   (10.2)

Setting k = n in (10.2), the pdf of the largest order statistic X₍ₙ₎ is therefore

f_{X(n)}(x) = n λαβ x^{α−1} (1 − x^α)^{β−1} [1 − (1 − x^α)^β]^{λn−1}
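The order-statistic density (10.2) should itself integrate to one for any k. The sketch below (our own names, `lam` for λ, `nn` for the sample size) checks this for k = 2, nn = 5:

```python
import math

def emd_pdf(x, alpha, beta, lam):
    return (lam * alpha * beta * x**(alpha - 1) * (1 - x**alpha)**(beta - 1)
            * (1 - (1 - x**alpha)**beta)**(lam - 1))

def emd_cdf(x, alpha, beta, lam):
    return (1 - (1 - x**alpha)**beta)**lam

def order_stat_pdf(x, k, nn, alpha, beta, lam):
    """pdf of the kth order statistic in a sample of size nn, eqs. (10.1)-(10.2)."""
    const = math.factorial(nn) / (math.factorial(k - 1) * math.factorial(nn - k))
    F = emd_cdf(x, alpha, beta, lam)
    return const * emd_pdf(x, alpha, beta, lam) * F**(k - 1) * (1 - F)**(nn - k)

# Midpoint-rule check that the k=2, nn=5 order-statistic density integrates to one
m = 100_000
total = sum(order_stat_pdf((i + 0.5) / m, 2, 5, 2.0, 3.0, 2.0)
            for i in range(m)) / m
```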
Maximum Likelihood Estimation for the Shape Parameter of the Exponentiated Minimax Distribution, Assuming the Shape Parameters α and β Are Known

The likelihood function of a sample x = (x₁, …, xₙ) is

L(x | λ) = λⁿ αⁿ βⁿ Π_{i=1}^{n} x_i^{α−1} Π_{i=1}^{n} (1 − x_i^α)^{β−1} Π_{i=1}^{n} {1 − (1 − x_i^α)^β}^{λ−1}   (11.1)

ln L(x | λ) = n ln λ + n ln α + n ln β + (α − 1) Σ_{i=1}^{n} ln x_i + (β − 1) Σ_{i=1}^{n} ln(1 − x_i^α) + (λ − 1) Σ_{i=1}^{n} ln{1 − (1 − x_i^α)^β}   (11.2)

As the shape parameters α and β are assumed known, the ML estimator of the shape parameter λ is obtained by solving

∂ ln L(x | λ)/∂λ = n/λ + Σ_{i=1}^{n} ln{1 − (1 − x_i^α)^β} = 0

λ̂ = −n / Σ_{i=1}^{n} ln{1 − (1 − x_i^α)^β}   (11.3)
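The estimator (11.3) can be exercised on simulated data. Because the cdf (2.4) has the closed-form inverse x = [1 − (1 − u^{1/λ})^{1/β}]^{1/α}, samples are easy to draw by inverse transform. The sketch below (our own code, `lam` for λ) recovers λ from a large sample with α and β held at their true values:

```python
import math
import random

random.seed(1)
a, b, lam_true = 2.0, 3.0, 2.0
n = 20_000

# Inverse-transform sampling: solve F(x) = u for x using eq. (2.4)
xs = [(1 - (1 - random.random()**(1 / lam_true))**(1 / b))**(1 / a)
      for _ in range(n)]

# ML estimator of lambda with alpha, beta known, eq. (11.3)
lam_hat = -n / sum(math.log(1 - (1 - x**a)**b) for x in xs)
```

With α and β known, λ̂ depends on the data only through Σ ln{1 − (1 − x_i^α)^β}, so the computation is a single pass over the sample.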
CONCLUSION
We have defined a three-parameter Exponentiated Minimax distribution (EMD) as a generalization of the Minimax distribution. The resulting model is bounded on the (0, 1) support, and its density can be constant, increasing or decreasing, depending on the values of the parameters. The Minimax, Power and Uniform distributions are found to be sub-models of the proposed model. Some statistical properties of this distribution have been discussed and studied, and the unknown parameter of the distribution has been estimated using the method of maximum likelihood.
REFERENCES
Adepoju, K.A., Chukwu, A.U. and Shittu, O.I. (2014). Statistical Properties of the Exponentiated Nakagami Distribution, Journal of Mathematics and System Science, 4, 180-185.
Barreto-Souza, W. and Cribari-Neto, F. (2009). A generalization of the exponential-Poisson distribution, Statistics and Probability Letters, 79, 2493-2500.
Barreto-Souza, W., Santos, A. H. S. and Cordeiro, G. M. (2010). The beta generalized exponential distribution, Journal of Statistical Computation and Simulation, 80, 159-172.
Cordeiro, G. M, Ortega, E. M., and da Cunha D. C (2013). The Exponentiated Generalized Class of Distributions,
Journal of Data Science, 11, 1-27.
Gupta, R. C., Gupta, P. L. and Gupta, R. D. (1998). Modeling failure time data by Lehmann alternatives,
Communications in Statistics - Theory and Methods, 27, 887-904.
Lemonte, A. J. and Cordeiro, G. M. (2011). The exponentiated generalized inverse Gaussian distribution, Statistics and Probability Letters, 81(4), 506-517.
Jones, M.C. (2007). Connecting distributions with power tails on the real line, the half line and the interval, International Statistical Review, 75, 58-69.
Fatima, K. and Ahmad, S.P. (2016). Characterization and Bayesian Estimation of Minimax Distribution, International Journal of Modern Mathematical Sciences, 14(4), 423-447.
McDonald, J.B., (1984). Some Generalized Functions for the Size Distribution of Income, Econometrica, 52,647-664.
Mudholkar, G. S., Srivastava, D. K. and Freimer, M. (1995). The exponentiated Weibull family, Technometrics, 37, 436-445.
Nadarajah, S., Kotz, S. (2003). The exponentiated Frechet distribution, InterStat, December, No.1
Nadarajah, S.(2005). The exponentiated Gumbel distribution with climate application, Environmetrics, 17, 13-23.
Oguntunde, P.E. (2015). On the Exponentiated Weighted Exponential Distribution and Its Basic Statistical Properties, Applied Science Reports, 10(3), 160-167.
Citation: Fatima K, Ahmad SP, Chishti TA (2017). A Generalization of Minimax Distribution. International Journal of
Statistics and Mathematics, 4(1): 082-098.
Copyright: 2017 Fatima et al. This is an open-access article distributed under the terms of the Creative Commons
Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original
author and source are cited.