Concept in probability theory

Modified Kumaraswamy
[Plots of the probability density function and the cumulative distribution function]
Parameters: {\displaystyle \alpha >0} (real), {\displaystyle \beta >0} (real)
Support: {\displaystyle x\in (0,1)}
PDF: {\displaystyle {\frac {\alpha \beta \mathrm {e} ^{\alpha -\alpha /x}(1-\mathrm {e} ^{\alpha -\alpha /x})^{\beta -1}}{x^{2}}}}
CDF: {\displaystyle 1-(1-\mathrm {e} ^{\alpha -\alpha /x})^{\beta }}
Quantile: {\displaystyle {\frac {\alpha }{\alpha -\log(1-(1-u)^{1/\beta })}}}
Mean: {\displaystyle \alpha \beta \mathrm {e} ^{\alpha }\sum _{i=0}^{\infty }(-1)^{i}{\binom {\beta -1}{i}}\mathrm {e} ^{\alpha i}\Gamma \left[0,(i+1)\alpha \right]}
Variance: {\displaystyle \alpha ^{2}\beta \mathrm {e} ^{\alpha }\sum _{i=0}^{\infty }(-1)^{i}{\binom {\beta -1}{i}}\mathrm {e} ^{\alpha i}(i+1)\Gamma \left[-1,(i+1)\alpha \right]-\mu ^{2}}
Moments: {\displaystyle \operatorname {E} \left(X^{h}\right)=\alpha \beta \mathrm {e} ^{\alpha }\sum _{i=0}^{\infty }(-1)^{i}{\binom {\beta -1}{i}}\mathrm {e} ^{\alpha i}(\alpha +\alpha i)^{h-1}\Gamma \left[1-h,(i+1)\alpha \right]}
In probability theory, the Modified Kumaraswamy (MK) distribution is a two-parameter continuous probability distribution defined on the interval (0,1). It serves as an alternative to the beta and Kumaraswamy distributions for modeling double-bounded random variables. The MK distribution was originally proposed by Sagrillo, Guerra, and Bayer[1] through a transformation of the Kumaraswamy distribution.
Its density exhibits an increasing-decreasing-increasing shape, which is not characteristic of the beta or Kumaraswamy distributions. The motivation for the proposal stemmed from applications in hydro-environmental problems.
Probability density function
The probability density function of the Modified Kumaraswamy distribution is
{\displaystyle f_{X}\left(x;{\boldsymbol {\theta }}\right)={\frac {\alpha \beta \mathrm {e} ^{\alpha -\alpha /x}(1-\mathrm {e} ^{\alpha -\alpha /x})^{\beta -1}}{x^{2}}}}
where {\displaystyle {\boldsymbol {\theta }}=(\alpha ,\beta )^{\top }}, and {\displaystyle \alpha >0} and {\displaystyle \beta >0} are shape parameters.
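The density above translates directly into code. The following is a minimal sketch (the function name `mk_pdf` and the parameter values α = 2, β = 3 are chosen here for illustration, not taken from the source); a crude midpoint sum checks that the density integrates to one over (0, 1).

```python
import math

def mk_pdf(x, alpha, beta):
    """Density of the Modified Kumaraswamy distribution on (0, 1)."""
    if not 0.0 < x < 1.0:
        return 0.0
    g = math.exp(alpha - alpha / x)              # e^(alpha - alpha/x)
    return alpha * beta * g * (1.0 - g) ** (beta - 1.0) / x ** 2

# Sanity check: a midpoint-rule sum of the density over (0, 1) should be ~1.
n = 100_000
total = sum(mk_pdf((k + 0.5) / n, 2.0, 3.0) for k in range(n)) / n
```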
Cumulative distribution function
The cumulative distribution function of Modified Kumaraswamy is given by
{\displaystyle F_{X}\left(x;{\boldsymbol {\theta }}\right)=1-(1-\mathrm {e} ^{\alpha -\alpha /x})^{\beta }}
where {\displaystyle {\boldsymbol {\theta }}=(\alpha ,\beta )^{\top }}, and {\displaystyle \alpha >0} and {\displaystyle \beta >0} are shape parameters.
The inverse cumulative distribution function (quantile function) is
{\displaystyle Q_{X}\left(u;{\boldsymbol {\theta }}\right)={\frac {\alpha }{\alpha -\log(1-(1-u)^{1/\beta })}}}
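Because the quantile function is available in closed form, random variates can be generated by inverse-transform sampling. A short sketch (function names and parameter values are illustrative, not from the source):

```python
import math
import random

def mk_cdf(x, alpha, beta):
    """F(x) = 1 - (1 - e^(alpha - alpha/x))^beta, extended to [0, 1]."""
    if x <= 0.0:
        return 0.0
    if x >= 1.0:
        return 1.0
    return 1.0 - (1.0 - math.exp(alpha - alpha / x)) ** beta

def mk_quantile(u, alpha, beta):
    """Q(u) = alpha / (alpha - log(1 - (1 - u)^(1/beta))), for 0 < u < 1."""
    return alpha / (alpha - math.log(1.0 - (1.0 - u) ** (1.0 / beta)))

# Inverse-transform sampling: push uniform draws through the quantile function.
rng = random.Random(42)
sample = [mk_quantile(rng.random(), 2.0, 3.0) for _ in range(10_000)]
roundtrip = mk_cdf(mk_quantile(0.3, 2.0, 3.0), 2.0, 3.0)   # should recover 0.3
```

The round trip F(Q(u)) = u is exact algebraically, so it doubles as a quick consistency check of both formulas.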
The hth statistical moment of X is given by:
{\displaystyle {\textrm {E}}\left(X^{h}\right)=\alpha \beta \mathrm {e} ^{\alpha }\sum _{i=0}^{\infty }(-1)^{i}{\binom {\beta -1}{i}}\mathrm {e} ^{\alpha i}(\alpha +\alpha i)^{h-1}\Gamma \left[1-h,\left(i+1\right)\alpha \right]}
The mean {\displaystyle \mu } of X, a measure of central tendency, is:
{\displaystyle \mu ={\text{E}}(X)=\alpha \beta \mathrm {e} ^{\alpha }\sum _{i=0}^{\infty }(-1)^{i}{\binom {\beta -1}{i}}\mathrm {e} ^{\alpha i}\Gamma \left[0,\left(i+1\right)\alpha \right]}
and its variance {\displaystyle \sigma ^{2}} is:
{\displaystyle \sigma ^{2}={\text{E}}(X^{2})-\mu ^{2}=\alpha ^{2}\beta \mathrm {e} ^{\alpha }\sum _{i=0}^{\infty }(-1)^{i}{\binom {\beta -1}{i}}\mathrm {e} ^{\alpha i}(i+1)\Gamma \left[-1,\left(i+1\right)\alpha \right]-\mu ^{2}}
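For integer β the binomial coefficient vanishes for i ≥ β, so the series for the mean terminates and is easy to check numerically. The sketch below (all names and the values α = 2, β = 3 are chosen here for illustration) evaluates the series using Γ(0, x) = E₁(x), computed by simple quadrature, and compares it against direct numerical integration of x·f(x):

```python
import math

def mk_pdf(x, alpha, beta):
    g = math.exp(alpha - alpha / x)
    return alpha * beta * g * (1.0 - g) ** (beta - 1.0) / x ** 2

def upper_gamma0(x, n=50_000):
    """Gamma(0, x) = E1(x), via E1(x) = integral_0^1 e^(-x/u)/u du (midpoint rule)."""
    return sum(math.exp(-x / ((k + 0.5) / n)) / ((k + 0.5) / n)
               for k in range(n)) / n

def mk_mean_series(alpha, beta):
    """Mean via the series; for integer beta, binom(beta-1, i) = 0 for i >= beta."""
    return alpha * beta * math.exp(alpha) * sum(
        (-1) ** i * math.comb(beta - 1, i) * math.exp(alpha * i)
        * upper_gamma0((i + 1) * alpha)
        for i in range(beta))

alpha, beta = 2.0, 3
m = 50_000
mean_quad = sum((k + 0.5) / m * mk_pdf((k + 0.5) / m, alpha, beta)
                for k in range(m)) / m              # direct quadrature of x*f(x)
mean_series = mk_mean_series(alpha, beta)
```

The two routes agree to several decimal places, which is a useful cross-check when implementing the series.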
Parameter estimation
Sagrillo, Guerra, and Bayer[1] suggested using the maximum likelihood method for parameter estimation of the MK distribution. The log-likelihood function for the MK distribution, given a sample {\displaystyle x_{1},\ldots ,x_{n}}, is:
{\displaystyle {\begin{aligned}\ell ({\boldsymbol {\theta }})=&\,n\alpha +n\log \left(\alpha \right)+n\log \left(\beta \right)-\alpha \sum _{i=1}^{n}{\frac {1}{x_{i}}}-2\sum _{i=1}^{n}\log(x_{i})\\&+(\beta -1)\sum _{i=1}^{n}\log(1-\mathrm {e} ^{\alpha -\alpha /x_{i}}).\end{aligned}}}
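The log-likelihood translates term by term into code. A minimal sketch (the name `mk_loglik` and the sample values are illustrative only, not from the source):

```python
import math

def mk_loglik(theta, xs):
    """Log-likelihood of the MK distribution for a sample xs with values in (0, 1)."""
    alpha, beta = theta
    n = len(xs)
    return (n * alpha + n * math.log(alpha) + n * math.log(beta)
            - alpha * sum(1.0 / x for x in xs)
            - 2.0 * sum(math.log(x) for x in xs)
            + (beta - 1.0) * sum(math.log(1.0 - math.exp(alpha - alpha / x))
                                 for x in xs))

xs = [0.2, 0.4, 0.5, 0.7, 0.9]          # toy sample for illustration
ll = mk_loglik((2.0, 3.0), xs)          # equals the sum of log-densities over xs
```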
The components of the score vector {\displaystyle U\left({\boldsymbol {\theta }}\right)=\left[{\frac {\partial \ell ({\boldsymbol {\theta }})}{\partial \alpha }},{\frac {\partial \ell ({\boldsymbol {\theta }})}{\partial \beta }}\right]}
are
{\displaystyle {\begin{aligned}{\frac {\partial \ell ({\boldsymbol {\theta }})}{\partial \alpha }}=n+{\frac {n}{\alpha }}+(\beta -1)\mathrm {e} ^{\alpha }\sum _{i=1}^{n}{\frac {x_{i}-1}{x_{i}(\mathrm {e} ^{\alpha }-\mathrm {e} ^{\alpha /x_{i}})}}-\sum _{i=1}^{n}{\frac {1}{x_{i}}}\end{aligned}}}
and
{\displaystyle {\begin{aligned}{\frac {\partial \ell ({\boldsymbol {\theta }})}{\partial \beta }}={\frac {n}{\beta }}+\sum _{i=1}^{n}\log(1-\mathrm {e} ^{\alpha -\alpha /x_{i}})\end{aligned}}}
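The score expressions can be validated against central finite differences of the log-likelihood. A self-contained sketch (the log-likelihood is restated so the snippet runs on its own; names, sample values, and the evaluation point are illustrative):

```python
import math

def mk_loglik(alpha, beta, xs):
    n = len(xs)
    return (n * alpha + n * math.log(alpha) + n * math.log(beta)
            - alpha * sum(1.0 / x for x in xs)
            - 2.0 * sum(math.log(x) for x in xs)
            + (beta - 1.0) * sum(math.log(1.0 - math.exp(alpha - alpha / x))
                                 for x in xs))

def mk_score(alpha, beta, xs):
    """Analytic score (d ell/d alpha, d ell/d beta) from the expressions above."""
    n = len(xs)
    ea = math.exp(alpha)
    d_alpha = (n + n / alpha
               + (beta - 1.0) * ea * sum((x - 1.0) / (x * (ea - math.exp(alpha / x)))
                                         for x in xs)
               - sum(1.0 / x for x in xs))
    d_beta = n / beta + sum(math.log(1.0 - math.exp(alpha - alpha / x)) for x in xs)
    return d_alpha, d_beta

xs = [0.2, 0.4, 0.5, 0.7, 0.9]
a, b, h = 2.0, 3.0, 1e-6
an_da, an_db = mk_score(a, b, xs)
num_da = (mk_loglik(a + h, b, xs) - mk_loglik(a - h, b, xs)) / (2.0 * h)
num_db = (mk_loglik(a, b + h, xs) - mk_loglik(a, b - h, xs)) / (2.0 * h)
```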
The MLEs of {\displaystyle {\boldsymbol {\theta }}}, denoted by {\displaystyle {\hat {\boldsymbol {\theta }}}=\left({\hat {\alpha }},{\hat {\beta }}\right)^{\top }}, are obtained as the simultaneous solution of {\displaystyle {\boldsymbol {U}}({\boldsymbol {\theta }})={\boldsymbol {0}}}, where {\displaystyle {\boldsymbol {0}}} is a two-dimensional null vector.
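Note that setting the β-component of the score to zero gives a closed form at fixed α, namely β̂(α) = −n / Σ log(1 − e^(α − α/xᵢ)), which reduces estimation to a one-dimensional search over α. The grid search below is a deliberately simple sketch of that idea, not the authors' procedure; all names and settings are chosen here for illustration:

```python
import math
import random

def mk_quantile(u, alpha, beta):
    return alpha / (alpha - math.log(1.0 - (1.0 - u) ** (1.0 / beta)))

def mk_profile_mle(xs, alphas):
    """Maximize the profile log-likelihood over a grid of alpha values;
    for each alpha, beta_hat(alpha) = -n / sum(log(1 - e^(alpha - alpha/x)))
    solves d ell/d beta = 0 exactly."""
    n = len(xs)
    inv_sum = sum(1.0 / x for x in xs)
    log_sum = sum(math.log(x) for x in xs)
    best = None
    for a in alphas:
        s = sum(math.log(1.0 - math.exp(a - a / x)) for x in xs)
        b = -n / s                       # beta_hat(alpha), always positive (s < 0)
        ll = (n * a + n * math.log(a) + n * math.log(b)
              - a * inv_sum - 2.0 * log_sum + (b - 1.0) * s)
        if best is None or ll > best[0]:
            best = (ll, a, b)
    return best[1], best[2]

# Fit on synthetic data drawn (by inversion) from MK(alpha=2, beta=3):
rng = random.Random(0)
data = [mk_quantile(rng.random(), 2.0, 3.0) for _ in range(2_000)]
grid = [0.2 + 0.02 * k for k in range(290)]      # alpha grid over [0.2, 6.0)
a_hat, b_hat = mk_profile_mle(data, grid)
```

In practice a proper optimizer (e.g. Newton-type iterations on the score equations) would replace the grid, but the profile reduction carries over unchanged.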
Related distributions
If {\displaystyle X\sim {\textrm {MK}}(\alpha ,\beta )}, then {\displaystyle \exp \left\{1-{\frac {1}{X}}\right\}\sim {\textrm {K}}(\alpha ,\beta )} (Kumaraswamy distribution).
If {\displaystyle X\sim {\textrm {MK}}(\alpha ,\beta )}, then {\displaystyle {\frac {1}{X}}-1\sim } exponentiated exponential (EE) distribution.[2]
If {\displaystyle X\sim {\textrm {MK}}(1,\beta )}, then {\displaystyle \exp \left\{1-{\frac {1}{X}}\right\}\sim {\textrm {Beta}}(1,\beta )} (beta distribution).
If {\displaystyle X\sim {\textrm {MK}}(\alpha ,1)}, then {\displaystyle \exp \left\{1-{\frac {1}{X}}\right\}\sim {\textrm {Beta}}(\alpha ,1)}.
If {\displaystyle X\sim {\textrm {MK}}(\alpha ,1)}, then {\displaystyle {\frac {1}{X}}-1\sim {\textrm {Exp}}(\alpha )} (exponential distribution), the special case of the EE relationship above with {\displaystyle \beta =1}.
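The exponential relationship is easy to check empirically for β = 1 (where the EE law reduces to Exp(α)). The sketch below draws MK(α, 1) samples by inversion and compares the transformed values 1/X − 1 with the Exp(α) mean and CDF (all names and settings are illustrative):

```python
import math
import random

def mk_quantile(u, alpha, beta):
    return alpha / (alpha - math.log(1.0 - (1.0 - u) ** (1.0 / beta)))

# If X ~ MK(alpha, 1), then T = 1/X - 1 should follow Exp(alpha).
alpha = 2.0
rng = random.Random(7)
t = [1.0 / mk_quantile(rng.random(), alpha, 1.0) - 1.0 for _ in range(20_000)]
emp_mean = sum(t) / len(t)                       # Exp(alpha) has mean 1/alpha
emp_cdf_1 = sum(x <= 1.0 for x in t) / len(t)    # compare with 1 - e^(-alpha)
```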
Applications
The Modified Kumaraswamy distribution was introduced for modeling hydro-environmental data. It has been shown to outperform the beta and Kumaraswamy distributions for the useful volume of water reservoirs in Brazil.[1]
References
1. Sagrillo, M.; Guerra, R. R.; Bayer, F. M. (2021). "Modified Kumaraswamy distributions for double bounded hydro-environmental data". Journal of Hydrology. 603. doi:10.1016/j.jhydrol.2021.127021.
2. Gupta, R. D.; Kundu, D. (1999). "Theory & Methods: Generalized exponential distributions". Australian & New Zealand Journal of Statistics. 41: 173–188. doi:10.1111/1467-842X.00072.