
SOME COMPUTATIONAL ASPECTS TO FIND ACCURATE ESTIMATES FOR THE PARAMETERS OF THE GENERALIZED GAMMA DISTRIBUTION

ABSTRACT

In this paper, we discuss computational aspects related to obtaining accurate inferences for the parameters of the generalized gamma (GG) distribution. Usually, the maximum likelihood estimators (MLEs) for the GG distribution do not have a stable behavior, depending on large sample sizes and on good initial values for the iterative numerical algorithms. Under a Bayesian approach, this problem remains, but is now related to the choice of prior distributions for the parameters of this model. We present some exploratory techniques to obtain good initial values to be used in the iterative procedures and also to elicit appropriate informative priors. Finally, our proposed methodology is also considered for data sets in the presence of censoring.

Keywords:
Bayesian Inference; Classical inference; Generalized gamma distribution; Random censoring

1 INTRODUCTION

The generalized gamma (GG) distribution is very flexible for fitting reliability data due to the different forms of its hazard function. This distribution was introduced by Stacy [23] and has probability density function (p.d.f.) given by

f(t \mid \theta) = \frac{\alpha}{\Gamma(\phi)}\,\mu^{\alpha\phi}\, t^{\alpha\phi-1} \exp\{-(\mu t)^{\alpha}\}, \qquad (1)

where t > 0 and θ = (ϕ, μ, α) is the vector of unknown parameters. The parameters α > 0 and ϕ > 0 are shape parameters and μ > 0 is a scale parameter. This parametrization is a standard form of the GG distribution, originating from a general family of survival models obtained through location and scale changes of the form Y = log(T) = μ + σW, where W has a generalized extreme value distribution with parameter ϕ and α = 1/σ. The survival function is given by

S(t \mid \theta) = \int_{(\mu t)^{\alpha}}^{\infty} \frac{1}{\Gamma(\phi)}\, w^{\phi-1} e^{-w}\, dw = \frac{\Gamma\left[\phi, (\mu t)^{\alpha}\right]}{\Gamma(\phi)},

where \Gamma[y, x] = \int_{x}^{\infty} w^{y-1} e^{-w}\, dw is the upper incomplete gamma function. Many lifetime distributions are special cases of the GG distribution, such as the Weibull distribution (ϕ = 1), the gamma distribution (α = 1), the log-normal distribution (limiting case when ϕ → ∞) and the generalized normal distribution (α = 2). The generalized normal distribution, in turn, includes several well-known distributions such as the half-normal (ϕ = 1/2, μ = 1/(√2σ)), Rayleigh (ϕ = 1, μ = 1/(√2σ)), Maxwell-Boltzmann (ϕ = 3/2) and chi (ϕ = k/2, k = 1, 2, ...) distributions.

Although inferential procedures for the gamma distribution can be easily obtained under both classical and Bayesian approaches, especially given the computational advances of recent years in terms of hardware and software [2], the inferential procedures for the generalized gamma distribution based on maximum likelihood estimates (MLEs) can be unstable (e.g., [8], [14], [24], [25]) and their results may depend on the initial values chosen for the parameters in the iterative methods. Ramos et al. [20] simplified the MLEs under complete samples and used the obtained results as initial values for censored data. However, the likelihood equations may return more than one root, so the obtained MLEs may not be unique. Additionally, the obtained results rely on asymptotic properties that are not achieved even for large samples (n > 400), as discussed by Prentice [17]. A similar problem appears under the Bayesian approach, where different prior distributions can have a great effect on the resulting estimates. It is important to point out that the literature on the GG distribution is very extensive, with many studies considering statistical inference as well as others more concentrated on applications [3]-[5], [15], [16], [18], [21], [22].

In this study, methods exploring modifications of Jeffreys' priors to obtain initial values for the parameter estimation are proposed. The obtained equations under complete and censored data are used as initial values to start the iterative numerical procedures as well as to elicit informative priors under the Bayesian approach. An intensive simulation study is presented in order to verify our proposed methodology. These results are of great practical interest since they enable the use of the GG distribution in many application areas. The proposed methodology is illustrated in two data sets.

This paper is organized as follows: Section 2 reviews the MLE for the GG distribution. Section 3 presents the Bayesian analysis. Section 4 introduces the methods to obtain good initial values under complete and censored data. Section 5 describes a simulation study to compare both classical and Bayesian approaches. Section 6 presents an analysis of the data set. Some final comments are made in Section 7.

2 CLASSICAL INFERENCE

Maximum likelihood estimates are obtained from the maximization of the likelihood function. Let T_1, ..., T_n be a random sample from a GG distribution; the likelihood function is given by

L(\theta; t) = \frac{\alpha^{n}}{\Gamma(\phi)^{n}}\,\mu^{n\alpha\phi}\prod_{i=1}^{n} t_i^{\alpha\phi-1}\exp\left\{-\mu^{\alpha}\sum_{i=1}^{n} t_i^{\alpha}\right\}. \qquad (2)

Setting \partial\log L(\theta; t)/\partial\alpha, \partial\log L(\theta; t)/\partial\mu and \partial\log L(\theta; t)/\partial\phi equal to zero, the obtained likelihood equations are

n\,\psi(\hat\phi) = n\hat\alpha\log(\hat\mu) + \hat\alpha\sum_{i=1}^{n}\log(t_i),
\qquad
n\hat\phi = \hat\mu^{\hat\alpha}\sum_{i=1}^{n} t_i^{\hat\alpha},
\qquad
\frac{n}{\hat\alpha} + n\hat\phi\log(\hat\mu) + \hat\phi\sum_{i=1}^{n}\log(t_i) = \hat\mu^{\hat\alpha}\sum_{i=1}^{n} t_i^{\hat\alpha}\log(\hat\mu t_i), \qquad (3)

where \psi(k) = \frac{\partial}{\partial k}\log\Gamma(k) = \Gamma'(k)/\Gamma(k) is the digamma function. The solutions of (3) provide the maximum likelihood estimates [8], [24]. Numerical methods such as Newton-Raphson are required to find the solution of this nonlinear system.
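
As an illustration, the nonlinear system (3) can also be bypassed by maximizing the log-likelihood in (2) directly with a general-purpose optimizer. The R sketch below is a simplification (it is not the Newton-Raphson scheme applied to (3)); it assumes a complete sample t and initial values start, and works on the log scale of the parameters so that the search stays in the positive orthant.

# Negative log-likelihood of the GG distribution; parameters on the log scale
# so that optim searches an unconstrained space (phi, mu, alpha > 0).
gg.negloglik <- function(par, t) {
  phi <- exp(par[1]); mu <- exp(par[2]); alpha <- exp(par[3])
  n <- length(t)
  -(n * log(alpha) - n * lgamma(phi) + n * alpha * phi * log(mu) +
      (alpha * phi - 1) * sum(log(t)) - mu^alpha * sum(t^alpha))
}

# Maximum likelihood estimates given initial values start = (phi0, mu0, alpha0).
gg.mle <- function(t, start = c(1, 1, 1)) {
  fit <- optim(log(start), gg.negloglik, t = t, method = "BFGS")
  exp(fit$par)   # back to the original scale: (phi.hat, mu.hat, alpha.hat)
}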

Under mild conditions, the maximum likelihood estimators of θ are consistent and asymptotically efficient, with an asymptotically normal joint distribution given by

\hat{\theta} \sim N_3\left[\theta,\, I^{-1}(\theta)\right] \quad \text{as } n \to \infty,

where I (θ ) is the Fisher information matrix

I(\theta) =
\begin{pmatrix}
\dfrac{1 + 2\psi(\phi) + \phi\,\psi'(\phi) + \phi\,\psi(\phi)^{2}}{\alpha^{2}} & -\dfrac{1 + \phi\,\psi(\phi)}{\mu} & -\dfrac{\psi(\phi)}{\alpha} \\
-\dfrac{1 + \phi\,\psi(\phi)}{\mu} & \dfrac{\phi\,\alpha^{2}}{\mu^{2}} & \dfrac{\alpha}{\mu} \\
-\dfrac{\psi(\phi)}{\alpha} & \dfrac{\alpha}{\mu} & \psi'(\phi)
\end{pmatrix}.

In the presence of censored observations, with censoring indicators δ_i (δ_i = 1 for an observed failure, δ_i = 0 for a censored time) and d = \sum_{i=1}^{n}\delta_i observed failures, the likelihood function is

L(\theta; t, \delta) = \frac{\alpha^{d}\,\mu^{d\alpha\phi}}{\Gamma(\phi)^{n}}\prod_{i=1}^{n} t_i^{\delta_i(\alpha\phi-1)}\exp\left\{-\mu^{\alpha}\sum_{i=1}^{n}\delta_i t_i^{\alpha}\right\}\prod_{i=1}^{n}\left(\Gamma\left[\phi, (\mu t_i)^{\alpha}\right]\right)^{1-\delta_i}. \qquad (4)

Setting \partial\log L(\theta; t, \delta)/\partial\alpha, \partial\log L(\theta; t, \delta)/\partial\mu and \partial\log L(\theta; t, \delta)/\partial\phi equal to zero, we get the likelihood equations

\sum_{i=1}^{n}\frac{(1-\delta_i)\,(\mu t_i)^{\alpha\phi}\, e^{-(\mu t_i)^{\alpha}}\log(\mu t_i)}{\Gamma\left[\phi, (\mu t_i)^{\alpha}\right]} = \frac{d}{\alpha} + d\phi\log\mu + \phi\sum_{i=1}^{n}\delta_i\log t_i - \mu^{\alpha}\sum_{i=1}^{n}\delta_i t_i^{\alpha}\log(\mu t_i),

\frac{d\alpha\phi}{\mu} - \alpha\mu^{\alpha-1}\sum_{i=1}^{n}\delta_i t_i^{\alpha} = \sum_{i=1}^{n}\frac{(1-\delta_i)\,\alpha t_i\,(\mu t_i)^{\alpha\phi-1}\, e^{-(\mu t_i)^{\alpha}}}{\Gamma\left[\phi, (\mu t_i)^{\alpha}\right]},

d\alpha\log\mu - n\psi(\phi) + \alpha\sum_{i=1}^{n}\delta_i\log t_i = -\sum_{i=1}^{n}\frac{(1-\delta_i)\,\Psi\left(\phi, (\mu t_i)^{\alpha}\right)}{\Gamma\left[\phi, (\mu t_i)^{\alpha}\right]}, \qquad (5)

where

\Psi(k, x) = \frac{\partial}{\partial k}\Gamma[k, x] = \int_{x}^{\infty} w^{k-1}\log(w)\, e^{-w}\, dw.

The solutions of the above nonlinear equations provide the MLEs of ϕ, μ and α. Other methods using the classical approach have been proposed in the literature to obtain inferences on the parameters of the GG distribution [1], [5], [9].
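
For the censored case, the likelihood (4) can also be evaluated directly in R, since the normalized upper incomplete gamma function Γ[ϕ, x]/Γ(ϕ) is available as pgamma(x, shape = ϕ, lower.tail = FALSE). The sketch below assumes a vector t of times and a vector delta of censoring indicators (1 = observed failure, 0 = censored); maximizing the negative of this function, as in the previous sketch, gives the MLEs under censoring.

# Negative log-likelihood under right censoring, equation (4).
# log S(t) = log(Gamma[phi, (mu*t)^alpha] / Gamma(phi))
#          = pgamma((mu*t)^alpha, phi, lower.tail = FALSE, log.p = TRUE).
gg.negloglik.cens <- function(par, t, delta) {
  phi <- exp(par[1]); mu <- exp(par[2]); alpha <- exp(par[3])
  logf <- log(alpha) - lgamma(phi) + alpha * phi * log(mu) +
          (alpha * phi - 1) * log(t) - (mu * t)^alpha
  logS <- pgamma((mu * t)^alpha, shape = phi, lower.tail = FALSE, log.p = TRUE)
  -sum(delta * logf + (1 - delta) * logS)
}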

3 A BAYESIAN APPROACH

The GG distribution has positive real parameters. A joint prior distribution for ϕ, μ and α can be obtained from the product of independent gamma distributions. This prior distribution is given by

\pi_G(\phi, \mu, \alpha) \propto \phi^{a_1-1}\,\mu^{a_2-1}\,\alpha^{a_3-1}\exp\{-b_1\phi - b_2\mu - b_3\alpha\}. \qquad (6)

From the product of the likelihood function (2) and the prior distribution (6), the joint posterior distribution for ϕ, μ and α is given by,

p_G(\phi, \mu, \alpha \mid t) \propto \alpha^{n+a_3-1}\,\mu^{n\alpha\phi+a_2-1}\,\frac{\phi^{a_1-1}}{\Gamma(\phi)^{n}}\prod_{i=1}^{n} t_i^{\alpha\phi-1}\exp\left\{-\mu^{\alpha}\sum_{i=1}^{n} t_i^{\alpha}\right\}\exp\{-b_1\phi - b_2\mu - b_3\alpha\}. \qquad (7)

The conditional posterior distributions for ϕ, μ and α needed for the Gibbs sampling algorithm are given as follows:

p_G(\alpha \mid \phi, \mu, t) \propto \alpha^{n+a_3-1}\,\mu^{n\alpha\phi+a_2-1}\prod_{i=1}^{n} t_i^{\alpha\phi-1}\exp\left\{-b_3\alpha - \mu^{\alpha}\sum_{i=1}^{n} t_i^{\alpha}\right\},

p_G(\phi \mid \mu, \alpha, t) \propto \mu^{n\alpha\phi+a_2-1}\,\frac{\phi^{a_1-1}}{\Gamma(\phi)^{n}}\prod_{i=1}^{n} t_i^{\alpha\phi-1}\exp\{-b_1\phi\},

p_G(\mu \mid \phi, \alpha, t) \propto \mu^{n\alpha\phi+a_2-1}\exp\left\{-b_2\mu - \mu^{\alpha}\sum_{i=1}^{n} t_i^{\alpha}\right\}. \qquad (8)
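
The conditional densities in (8) do not have standard forms. As a self-contained illustration (this is only a sketch of one possible sampler, not the scheme implemented by OpenBUGS in Section 5), a random-walk Metropolis step for each parameter within the Gibbs cycle only requires the logarithm of the joint posterior (7); the proposal standard deviation sd.prop and the initial values init are user choices.

# Log of the joint posterior (7) under the gamma priors (6); complete data.
log.post <- function(phi, mu, alpha, t, a, b) {
  if (phi <= 0 || mu <= 0 || alpha <= 0) return(-Inf)
  n <- length(t)
  (n + a[3] - 1) * log(alpha) + (n * alpha * phi + a[2] - 1) * log(mu) +
    (a[1] - 1) * log(phi) - n * lgamma(phi) +
    (alpha * phi - 1) * sum(log(t)) - mu^alpha * sum(t^alpha) -
    b[1] * phi - b[2] * mu - b[3] * alpha
}

# One-at-a-time random-walk Metropolis within the Gibbs cycle.
gg.mcmc <- function(t, a, b, init, n.iter = 10000, sd.prop = 0.1) {
  out <- matrix(NA, n.iter, 3, dimnames = list(NULL, c("phi", "mu", "alpha")))
  cur <- init
  for (k in 1:n.iter) {
    for (j in 1:3) {
      prop <- cur
      prop[j] <- cur[j] + rnorm(1, 0, sd.prop)
      logr <- log.post(prop[1], prop[2], prop[3], t, a, b) -
              log.post(cur[1], cur[2], cur[3], t, a, b)
      if (log(runif(1)) < logr) cur <- prop   # accept or keep the current value
    }
    out[k, ] <- cur
  }
  out
}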

Considering the presence of right censored observations, the joint posterior distribution for ϕ, μ and α, is proportional to the product of the likelihood function (4) and the prior distribution (6), resulting in

p_G(\phi, \mu, \alpha \mid t, \delta) \propto \alpha^{d+a_3-1}\,\mu^{d\alpha\phi+a_2-1}\,\frac{\phi^{a_1-1}}{\Gamma(\phi)^{n}}\prod_{i=1}^{n} t_i^{\delta_i(\alpha\phi-1)}\prod_{i=1}^{n}\Gamma\left[\phi, (\mu t_i)^{\alpha}\right]^{1-\delta_i}\exp\left\{-b_1\phi - b_2\mu - b_3\alpha - \mu^{\alpha}\sum_{i=1}^{n}\delta_i t_i^{\alpha}\right\}. \qquad (9)

The conditional posterior distributions for ϕ, μ and α needed for the Gibbs sampling algorithm are given as follows:

p_G(\alpha \mid \phi, \mu, t, \delta) \propto \alpha^{d+a_3-1}\,\mu^{d\alpha\phi+a_2-1}\prod_{i=1}^{n} t_i^{\delta_i\alpha\phi}\prod_{i=1}^{n}\Gamma\left[\phi, (\mu t_i)^{\alpha}\right]^{1-\delta_i}\exp\left\{-b_3\alpha - \mu^{\alpha}\sum_{i=1}^{n}\delta_i t_i^{\alpha}\right\},

p_G(\phi \mid \mu, \alpha, t, \delta) \propto \mu^{d\alpha\phi+a_2-1}\,\frac{\phi^{a_1-1}}{\Gamma(\phi)^{n}}\prod_{i=1}^{n} t_i^{\delta_i\alpha\phi}\prod_{i=1}^{n}\Gamma\left[\phi, (\mu t_i)^{\alpha}\right]^{1-\delta_i}\, e^{-b_1\phi},

p_G(\mu \mid \phi, \alpha, t, \delta) \propto \mu^{d\alpha\phi+a_2-1}\prod_{i=1}^{n}\Gamma\left[\phi, (\mu t_i)^{\alpha}\right]^{1-\delta_i}\exp\left\{-b_2\mu - \mu^{\alpha}\sum_{i=1}^{n}\delta_i t_i^{\alpha}\right\}. \qquad (10)

Using the prior opinion of experts or information from the data (empirical Bayesian methods), the hyperparameters of the gamma prior distributions can be elicited by the method of moments. Let λ_i and σ_i² be, respectively, the mean and variance of θ_i, with θ_i ∼ Gamma(a_i, b_i); then

a_i = \frac{\lambda_i^{2}}{\sigma_i^{2}} \quad \text{and} \quad b_i = \frac{\lambda_i}{\sigma_i^{2}}, \qquad i = 1, \ldots, 3. \qquad (11)
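
In R, the elicitation in (11) reduces to a couple of lines; for instance, a prior mean of 0.5 with prior variance 0.5 gives a Gamma(0.5, 1) prior, which is the pattern used in Section 6, where λ = σ² = θ~ yields Gamma(θ~, 1) priors.

# Method-of-moments elicitation (11): gamma hyperparameters (a, b)
# from a prior mean lambda and a prior variance sigma2.
elicit.gamma <- function(lambda, sigma2) {
  list(a = lambda^2 / sigma2, b = lambda / sigma2)
}

elicit.gamma(0.5, 0.5)   # a = 0.5, b = 1, i.e., a Gamma(0.5, 1) prior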

4 USEFUL EQUATIONS TO GET INITIAL VALUES

In this section, an exploratory technique is discussed to obtain good initial values for the numerical procedures used to obtain the MLEs of the GG distribution parameters. These initial values can also be used to elicit empirical prior distributions for the parameters (empirical Bayesian methods).

Although different non-informative prior distributions could be considered for the GG distribution, such as the Jeffreys prior or the reference prior [10], these priors involve transcendental functions of ϕ, such as the digamma and trigamma functions, which do not allow us to obtain closed-form estimators for ϕ. A simple non-informative prior distribution representing the lack of information on the parameters can be obtained by using Jeffreys' rule [10]. Since the GG distribution parameters lie in positive intervals (0, ∞), Jeffreys' rule leads to the following prior:

\pi_R(\phi, \mu, \alpha) \propto \frac{1}{\phi\,\mu\,\alpha}. \qquad (12)

The joint posterior distribution for ϕ, μ and α, using Jeffreys’ rule is proportional to the product of the likelihood function (2) and the prior distribution (12), resulting in,

p_R(\phi, \mu, \alpha \mid t) = \frac{1}{d_1(t)}\,\frac{\alpha^{n-1}}{\phi\,\Gamma(\phi)^{n}}\,\mu^{n\alpha\phi-1}\prod_{i=1}^{n} t_i^{\alpha\phi-1}\exp\left\{-\mu^{\alpha}\sum_{i=1}^{n} t_i^{\alpha}\right\}. \qquad (13)

However, the posterior density (13) is improper, i.e.,

d_1(t) = \int_{\mathcal{A}} \frac{\alpha^{n-1}}{\phi\,\Gamma(\phi)^{n}}\,\mu^{n\alpha\phi-1}\prod_{i=1}^{n} t_i^{\alpha\phi-1}\exp\left\{-\mu^{\alpha}\sum_{i=1}^{n} t_i^{\alpha}\right\} d\theta = \infty,

where θ = (ϕ, μ, α) and 𝒜 = {(0, ∞) × (0, ∞) × (0, ∞)} is the parameter space of θ.

Proof. Since \frac{\alpha^{n-1}}{\phi\,\Gamma(\phi)^{n}}\,\mu^{n\alpha\phi-1}\prod_{i=1}^{n} t_i^{\alpha\phi-1}\exp\{-\mu^{\alpha}\sum_{i=1}^{n} t_i^{\alpha}\} \ge 0, by Tonelli's theorem (see Folland [6]) we have

d_1(t) = \int_{0}^{\infty}\!\!\int_{0}^{\infty}\!\!\int_{0}^{\infty} \frac{\alpha^{n-1}}{\phi\,\Gamma(\phi)^{n}}\,\mu^{n\alpha\phi-1}\prod_{i=1}^{n} t_i^{\alpha\phi-1}\exp\left\{-\mu^{\alpha}\sum_{i=1}^{n} t_i^{\alpha}\right\} d\mu\, d\phi\, d\alpha
= \int_{0}^{\infty}\!\!\int_{0}^{\infty} \frac{\alpha^{n-2}\,\Gamma(n\phi)}{\phi\,\Gamma(\phi)^{n}}\,\frac{\prod_{i=1}^{n} t_i^{\alpha\phi-1}}{\left(\sum_{i=1}^{n} t_i^{\alpha}\right)^{n\phi}}\, d\phi\, d\alpha
\ge \frac{1}{c_1}\int_{0}^{\infty}\!\!\int_{0}^{1} \alpha^{n-2}\,\phi^{n-2}\,\frac{\prod_{i=1}^{n} t_i^{\alpha\phi}}{\left(\sum_{i=1}^{n} t_i^{\alpha}\right)^{n\phi}}\, d\phi\, d\alpha
= \frac{1}{c_1}\int_{0}^{\infty} \alpha^{n-2}\,\frac{\gamma\left(n-1,\, n q(\alpha)\right)}{\left(n q(\alpha)\right)^{n-1}}\, d\alpha
\ge g_1\int_{1}^{\infty} \frac{1}{\alpha}\, d\alpha = \infty,

where

q(\alpha) = \log\left(\frac{\sum_{i=1}^{n} t_i^{\alpha}}{\left(\prod_{i=1}^{n} t_i^{\alpha}\right)^{1/n}}\right) > 0

and c_1 and g_1 are positive constants such that the above inequalities hold. For more details and the proof of the existence of these constants, see Ramos [19]. □

Since the Jeffreys' rule prior returned an improper posterior, a modification of this prior was considered. The modified prior is given by

\pi_M(\phi, \mu, \alpha) \propto \frac{1}{\phi\,\mu\,\alpha^{\frac{1}{2} + \frac{\alpha}{1+\alpha}}}. \qquad (14)

This modification had two main objectives. The first was to produce a modified prior for α that behaves similarly to π(α) ∝ α^{-1} (see Figure 1). The second was to construct a joint prior distribution that yields a proper posterior distribution.

Figure 1
Plot of the Jeffreys' rule prior and the new prior considering different values for α.

The joint posterior distribution for ϕ, μ and α is proportional to the product of the likelihood function (2) and the prior distribution (14), resulting in

p_M(\phi, \mu, \alpha \mid t) = \frac{1}{d_2(t)}\,\frac{\alpha^{n-\frac{1}{2}-\frac{\alpha}{1+\alpha}}}{\phi\,\Gamma(\phi)^{n}}\,\mu^{n\alpha\phi-1}\prod_{i=1}^{n} t_i^{\alpha\phi-1}\exp\left\{-\mu^{\alpha}\sum_{i=1}^{n} t_i^{\alpha}\right\}. \qquad (15)

This joint posterior distribution is proper, i.e.,

d_2(t) = \int_{\mathcal{A}} \frac{\alpha^{n-\frac{1}{2}-\frac{\alpha}{1+\alpha}}}{\phi\,\Gamma(\phi)^{n}}\,\mu^{n\alpha\phi-1}\prod_{i=1}^{n} t_i^{\alpha\phi-1}\exp\left\{-\mu^{\alpha}\sum_{i=1}^{n} t_i^{\alpha}\right\} d\theta < \infty, \qquad (16)

where θ = (ϕ, μ, α) and 𝒜 = {(0, ∞) × (0, ∞) × (0, ∞)} is the parameter space of θ. The proof of this result is presented in Appendix A at the end of the paper.

The marginal posterior distribution for α and ϕ is obtained by integrating (15) with respect to μ, i.e.,

p_M(\phi, \alpha \mid t) = \frac{1}{d_2(t)}\,\frac{\alpha^{n-\frac{1}{2}-\frac{\alpha}{1+\alpha}}}{\phi\,\Gamma(\phi)^{n}}\prod_{i=1}^{n} t_i^{\alpha\phi-1}\int_{0}^{\infty}\mu^{n\alpha\phi-1}\exp\left\{-\mu^{\alpha}\sum_{i=1}^{n} t_i^{\alpha}\right\} d\mu.

From the result

\int_{0}^{\infty} x^{\alpha-1} e^{-\beta x}\, dx = \frac{\Gamma(\alpha)}{\beta^{\alpha}}

and considering the transformation v = \mu^{\alpha}, we have

\int_{0}^{\infty}\mu^{n\alpha\phi-1}\exp\left\{-\mu^{\alpha}\sum_{i=1}^{n} t_i^{\alpha}\right\} d\mu = \frac{1}{\alpha}\int_{0}^{\infty} v^{n\phi-1}\exp\left\{-v\sum_{i=1}^{n} t_i^{\alpha}\right\} dv = \frac{\Gamma(n\phi)}{\alpha\left(\sum_{i=1}^{n} t_i^{\alpha}\right)^{n\phi}}.

Therefore, the marginal posterior distribution of α and ϕ is given by

p_M(\phi, \alpha \mid t) = \frac{1}{d_2(t)}\,\frac{\alpha^{n-\frac{3}{2}-\frac{\alpha}{1+\alpha}}}{\phi\,\Gamma(\phi)^{n}}\prod_{i=1}^{n} t_i^{\alpha\phi-1}\,\frac{\Gamma(n\phi)}{\left(\sum_{i=1}^{n} t_i^{\alpha}\right)^{n\phi}}.

The logarithm of the above marginal distribution is given by

\log p_M(\alpha, \phi \mid t) = \left(n - \frac{3}{2} - \frac{\alpha}{1+\alpha}\right)\log(\alpha) + \log\Gamma(n\phi) - \log(\phi) - \log d_2(t) + (\alpha\phi - 1)\sum_{i=1}^{n}\log(t_i) - n\log\Gamma(\phi) - n\phi\log\left(\sum_{i=1}^{n} t_i^{\alpha}\right).

The first-order partial derivatives of log (pM (α, ϕ| t )), with respect to α and ϕ, are given, respectively, by,

\frac{\partial\log p_M(\alpha, \phi \mid t)}{\partial\alpha} = \kappa(\alpha) + \phi\sum_{i=1}^{n}\log(t_i) - \frac{n\phi\sum_{i=1}^{n} t_i^{\alpha}\log(t_i)}{\sum_{i=1}^{n} t_i^{\alpha}},
\qquad
\frac{\partial\log p_M(\alpha, \phi \mid t)}{\partial\phi} = n\psi(n\phi) + \alpha\sum_{i=1}^{n}\log(t_i) - \frac{1}{\phi} - n\psi(\phi) - n\log\left(\sum_{i=1}^{n} t_i^{\alpha}\right),

where \kappa(\alpha) = \left(n - \frac{3}{2} - \frac{\alpha}{1+\alpha}\right)\frac{1}{\alpha} - \frac{\log(\alpha)}{(1+\alpha)^{2}}. The maximum a posteriori estimates ϕ~ and α~ related to p_M(α, ϕ | t) are obtained by solving

\frac{\partial\log p_M(\alpha, \phi \mid t)}{\partial\alpha} = 0 \quad \text{and} \quad \frac{\partial\log p_M(\alpha, \phi \mid t)}{\partial\phi} = 0.

By solving \partial\log p_M(\alpha, \phi \mid t)/\partial\alpha = 0, we have

\tilde\phi = \frac{\kappa(\tilde\alpha)\sum_{i=1}^{n} t_i^{\tilde\alpha}}{n\sum_{i=1}^{n} t_i^{\tilde\alpha}\log(t_i) - \sum_{i=1}^{n} t_i^{\tilde\alpha}\sum_{i=1}^{n}\log(t_i)}. \qquad (17)

By solving \partial\log p_M(\alpha, \phi \mid t)/\partial\phi = 0 and replacing ϕ by ϕ~, the estimate α~ can be obtained by finding the root of

h(\alpha) = n\,\psi\left(n\tilde\phi\right) + \alpha\sum_{i=1}^{n}\log(t_i) - \frac{1}{\tilde\phi} - n\,\psi\left(\tilde\phi\right) - n\log\left(\sum_{i=1}^{n} t_i^{\alpha}\right) = 0. \qquad (18)

Using one-dimensional iterative methods, we obtain the value of α~. Replacing α~ in equation (17), we have ϕ~. The preliminary estimate of μ can be obtained from

\frac{\partial\log p_M(\phi, \mu, \alpha \mid t)}{\partial\mu} = \frac{n\tilde\phi\tilde\alpha - 1}{\tilde\mu} - \tilde\alpha\,\tilde\mu^{\tilde\alpha-1}\sum_{i=1}^{n} t_i^{\tilde\alpha} = 0,

and after some algebraic manipulation,

\tilde\mu = \begin{cases} \left(\dfrac{n\tilde\alpha\tilde\phi - 1}{\tilde\alpha\sum_{i=1}^{n} t_i^{\tilde\alpha}}\right)^{1/\tilde\alpha}, & \text{if } n\tilde\alpha\tilde\phi > 1, \\ \left(\dfrac{n\tilde\phi}{\sum_{i=1}^{n} t_i^{\tilde\alpha}}\right)^{1/\tilde\alpha}, & \text{if } n\tilde\alpha\tilde\phi < 1. \end{cases} \qquad (19)

This approach can easily be used to find good initial values ϕ~, μ~ and α~. These results are of great practical interest since they can be used in a Bayesian analysis to construct empirical prior distributions for ϕ, μ and α, as well as to initialize the iterative methods used to find the MLEs of the GG distribution parameters.
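
A minimal R sketch of this procedure is given below, assuming a complete sample t; the search interval for α and the use of optimize on h(α)² to locate the root of (18) are implementation choices, not part of the derivation above.

# Initial values (phi, mu, alpha) from equations (17)-(19); complete data t.
gg.init <- function(t, interval = c(0.01, 20)) {
  n <- length(t); slog <- sum(log(t))
  kappa <- function(a) (n - 3/2 - a / (1 + a)) / a - log(a) / (1 + a)^2
  phi.tilde <- function(a) {                                   # equation (17)
    sa <- sum(t^a)
    kappa(a) * sa / (n * sum(t^a * log(t)) - sa * slog)
  }
  h <- function(a) {                                           # equation (18)
    p <- phi.tilde(a)
    if (!is.finite(p) || p <= 0) return(Inf)
    n * digamma(n * p) + a * slog - 1 / p - n * digamma(p) - n * log(sum(t^a))
  }
  alpha <- optimize(function(a) h(a)^2, interval)$minimum      # root of h(alpha) = 0
  phi <- phi.tilde(alpha); sa <- sum(t^alpha)
  mu <- if (n * alpha * phi > 1)                               # equation (19)
    ((n * alpha * phi - 1) / (alpha * sa))^(1 / alpha) else (n * phi / sa)^(1 / alpha)
  c(phi = phi, mu = mu, alpha = alpha)
}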

5 A NUMERICAL ANALYSIS

In this section, a simulation study via the Monte Carlo method is performed in order to study the effect of the initial values obtained from (17)-(19) on the MLEs and posterior estimates, considering complete and censored data. In addition, MCMC (Markov Chain Monte Carlo) methods are used to obtain the posterior summaries. The influence of the sample size on the accuracy of the obtained estimates was also investigated. The following procedure was adopted in the study:

  1. Set the values of θ = (ϕ, μ, α) and n.

  2. Generate values of the GG(ϕ, μ, α) distribution with size n (a generation sketch is given after this list).

  3. Using the values obtained in step (2), calculate the estimates ϕ^, μ^ and α^ of the parameters ϕ, μ and α using the MLEs or the Bayesian approach.

  4. Repeat steps (2) and (3) N times.
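
For step (2), a convenient way to generate GG(ϕ, μ, α) values follows from the fact that (μT)^α has a standard gamma distribution with shape ϕ, so T = X^{1/α}/μ with X ∼ Gamma(ϕ, 1) has density (1). A short R sketch, using one of the configurations of this study, is:

# Generate n values from GG(phi, mu, alpha).
rgg <- function(n, phi, mu, alpha) {
  rgamma(n, shape = phi, rate = 1)^(1 / alpha) / mu
}

set.seed(2013)
t <- rgg(50, phi = 0.5, mu = 0.5, alpha = 3)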

The chosen values for this simulation study are θ = ((0.5, 0.5, 3), (0.4, 1.5, 5)) and n = (50, 100, 200). The seed used to generate the random values in the R software is 2013. The comparison between the methods was made through the averages and standard deviations of the N estimates obtained from the MLEs and the posterior means. The best estimation method is expected to have the means of the N estimates closest to the true values of θ, with the smallest standard deviations. It is important to point out that the results of this simulation study were similar for other values of θ.

5.1 A classical analysis

In the absence of information on which initial values should be used in the iterative method, we generated random values such that ϕ~ ∼ U(0, 5), μ~ ∼ U(0, 5) and α~ ∼ U(0, 5). This procedure is denoted as “Method 1”. On the other hand, from equations (17)-(19), good initial values are easily obtained to be used in the iterative methods to get the MLEs considering complete observations. In the presence of censored observations, we consider the possibility of using only the complete observations in order to obtain good initial values ϕ~, μ~ and α~. This procedure is denoted as “Method 2”.

Tables 1-3 present the means and standard deviations of the estimates from N = 50,000 samples obtained using the MLEs, calculated by Methods 1 and 2 for different values of θ and n, using complete and censored observations. We also report the coverage probability (CP) with a nominal 95% confidence level.

Table 1
Estimates of the means and standard deviations of the MLEs and coverage probabilities with a nominal 95% confidence level obtained from 50,000 samples using Method 1 for different values of θ and n, using complete observations as well as censored observations (20% censoring).

Table 2
Estimates of the means and standard deviations of the MLEs and coverage probabilities obtained from 50,000 samples using Method 2 for different values of θ and n, using complete observations as well as censored observations (20% censoring).

Table 3
Coverage probabilities with a nominal 95% confidence level obtained from 50,000 samples using Method 2 for different values of θ and n, using complete observations as well as censored observations (20% censoring).

From the results shown in Tables 1 to 3, we can observe that the MLEs are strongly affected by the initial values. For example, using Method 1, for μ = 0.5 and n = 50, the average obtained from the N = 50,000 estimates of μ is equal to 1.816, with a standard deviation equal to 1.973. Another problem with this approach is that the coverage probability tends to decrease even for large sample sizes.

Using our proposed methodology (Method 2), it can be seen that in all cases the method gives good estimates for ϕ, α and μ. The coverage probabilities show that, as the sample size increases, the intervals tend to the nominal 95% level, as expected. Therefore, we can easily obtain good inferences for the parameters of the GG distribution for both complete and censored data sets.

5.2 A Bayesian analysis

Under the Bayesian approach, we can obtain informative hyperparameters for the gamma prior distributions using the method of moments (11) with mean given by λ = (ϕ~, μ~, α~), obtained from equations (17)-(19). Additionally, we assumed the variance given by σθ² = (1, 1, 1), σθ² = (ϕ~, μ~, α~) and σθ² = (10, 10, 10) to verify which of these values produces better results. The procedure was replicated N = 1,000 times for different values of the parameters.

OpenBUGS was used to generate chains from the marginal posterior distributions of ϕ, μ and α via MCMC methods. The code is given by

model <- function() {
  # likelihood: each observation follows the generalized gamma distribution
  for (i in 1:N) {
    x[i] ~ dggamma(phi, mu, alpha)
  }
  # gamma priors with hyperparameters elicited from (11)
  phi   ~ dgamma(phi1, phi2)
  mu    ~ dgamma(mi1, mi2)
  alpha ~ dgamma(alpha1, alpha2)
}

In the case of censored data, we only have to change dggamma(phi,mu,alpha) in the code above to dggamma(phi,mu,alpha)I(delta[i],). The hyperparameters phi1, phi2, ..., alpha2 are obtained from our initial values. The main advantage of using OpenBUGS is its simplicity: the proposal distributions used to generate the marginal samples are defined by the program according to the model structure, so we only have to supply the data to be fitted. For each simulated sample, 8,500 iterations were performed. As burn-in, we discarded the first 1,000 iterations. To decrease the autocorrelation between the sampled values in each chain, a thinning interval of 15 was used, obtaining at the end three chains of size 500; these results were used to obtain the posterior summaries for ϕ, μ and α. The convergence of the Gibbs sampling algorithm was checked by the Geweke criterion [7] at a 95% confidence level.
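
For completeness, a possible way to run the model above from R through the R2OpenBUGS package with these settings is sketched below; it assumes that the installed OpenBUGS version provides the dggamma distribution used in the code, and the object names (x, the hyperparameters phi1, ..., alpha2 and the initial values init) are illustrative.

library(R2OpenBUGS)

# Write the model of the previous block to a file and run it with the
# settings described above (8,500 iterations, 1,000 burn-in, thinning of 15).
write.model(model, "gg_model.txt")
data  <- list(N = length(x), x = x, phi1 = phi1, phi2 = phi2,
              mi1 = mi1, mi2 = mi2, alpha1 = alpha1, alpha2 = alpha2)
inits <- function() list(phi = as.numeric(init["phi"]), mu = as.numeric(init["mu"]),
                         alpha = as.numeric(init["alpha"]))
fit <- bugs(data, inits, parameters.to.save = c("phi", "mu", "alpha"),
            model.file = "gg_model.txt", n.chains = 1,
            n.iter = 8500, n.burnin = 1000, n.thin = 15)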

To illustrate the importance of good initial values and informative priors, we first considered a simulation study using flat priors, i.e., prior distributions with large variance; here we assume that θi ∼ Uniform(0, 40), i = 1, ..., 3, or θi ∼ Gamma(0.01, 0.01). Table 4 displays the means and standard deviations of the posterior means from 1,000 samples obtained using the flat priors, calculated for different values of θ and n.

Table 4
Means and standard deviations of the initial values and of the posterior means subsequently obtained from 1,000 simulated samples with different values of θ and n, using the flat priors θi ∼ Uniform(0, 40) or θi ∼ Gamma(0.01, 0.01).

From this table, we observe that flat priors with vague information may affect the posterior estimates, especially for small sample sizes; for instance, in the case of n = 50 and α = 3.0 we obtained a mean of the estimates equal to 8.762 with standard deviation 44.485, which is undesirable. On the other hand, Tables 5-6 display the means, standard deviations and coverage probabilities of the estimates from 1,000 samples obtained using the initial values (IV) and the Bayesian estimates (BE) given by the posterior means, calculated for different values of θ and n, using complete data.

Table 5
Coverage probabilities with a confidence level of 95% of the estimates of posterior means obtained from 1000 simulated samples of size n = (50, 100, 200), with different values of θ , using the gamma prior distribution with λ=(ϕ~,μ~,α~),σθ2=(1,1,1),σθ2=(ϕ~,μ~,α~) and σθ2 = (10, 10, 10) and complete observations.

Table 6
Means and standard deviations from initial values and posterior means subsequently obtained from 1000 simulated samples with different values of θ and n, using the gamma prior distribution with λ=(ϕ~,μ~,α~),σθ2=(1,1,1),σθ2=(ϕ~,μ~,α~) and σθ2 = (10, 10, 10) assuming complete observations.

We observed from Tables 5-6 that, using the values calculated from equations (17)-(19) in the method of moments (11) with mean λ = θ~ and variance σθ² = θ~ to get the hyperparameter values of the gamma distributions, we obtained very good posterior summaries for ϕ, μ and α under the Bayesian approach.

Parameter estimation was also considered under censored observations (20% of censoring). Here, we used the same procedure described in Martinez et al. [13] to generate the pseudo-random samples under the same conditions described at the beginning of this section. Tables 7-8 display the means, standard deviations and coverage probabilities of the estimates from 1,000 samples obtained using the initial values and the Bayes estimates given by the posterior means, calculated for different values of θ and n, using censored observations (20% censoring).

Table 7
Coverage probabilities with a confidence level of 95% of the estimates of posterior means obtained from 1000 simulated samples of size n = (50, 100, 200), with different values of θ, using the gamma prior distribution with λ=(ϕ~,μ~,α~), σθ2=(1,1,1), σθ2=(ϕ~,μ~,α~) and σθ2 = (10, 10, 10), and censored observations (20% of censoring).

Table 8
Means and standard deviations from initial values and posterior means subsequently obtained from 1000 simulated samples with different values of θ and n, using the gamma prior distribution with λ=(ϕ~,μ~,α~),σθ2=(1,1,1),σθ2=(ϕ~,μ~,α~) and σθ2 = (10, 10, 10) assuming censored data (20% censoring).

Tables 7-8 show that the values calculated from equations (17)-(19), used in the method of moments (11) with mean λ = θ~ and variance σθ² = θ~, are useful to obtain the hyperparameter values of the gamma distributions under censored data, allowing us to obtain good posterior summaries for ϕ, μ and α under a Bayesian approach.

6 REAL DATA APPLICATIONS

In this section, the proposed methodology is applied to two data sets from the literature. The GG distribution is assumed to analyze these data sets and the obtained results are compared with those of other models, such as the Weibull, gamma and lognormal distributions, using the Akaike information criterion (AIC = −2l(θ^; x) + 2k), the corrected Akaike information criterion (AICc = AIC + 2k(k + 1)/(n − k − 1)) and the Bayesian information criterion (BIC = −2l(θ^; x) + k log(n)), where k is the number of fitted parameters and θ^ is the estimate of θ. In all cases, the Bayesian approach is used to obtain the estimates of the parameters of the different distributions. Additionally, we assume gamma priors with all hyperparameter values equal to 0.1 for the parameters of the Weibull, gamma and lognormal distributions. For each case, 30,000 Gibbs samples were simulated using MCMC methods, taking every 15th generated sample, which yields a final sample of size 2,000 used to obtain the posterior summaries.
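
For reference, these criteria can be computed from the maximized log-likelihood with a few lines of R; loglik, k and n below are, respectively, the maximized log-likelihood, the number of fitted parameters and the sample size.

# Information criteria used for model comparison in this section.
ic <- function(loglik, k, n) {
  aic <- -2 * loglik + 2 * k
  c(AIC  = aic,
    AICc = aic + 2 * k * (k + 1) / (n - k - 1),
    BIC  = -2 * loglik + k * log(n))
}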

6.1 Remission times of patients with cancer

In this section, we consider a data set that represents the remission times (in months) of a random sample of 128 bladder cancer patients [12]. This data set does not have censored values (see Table 9).

From equations (17)-(19), we have ϕ~ = 0.5665, μ~ = 0.2629 and α~ = 1.4669. Using (11) and taking the mean λ = θ~ and variance σθ² = θ~, we obtain the priors ϕ ∼ Gamma(0.5665, 1), μ ∼ Gamma(0.2629, 1) and α ∼ Gamma(1.4669, 1). The posterior summaries obtained from the MCMC methods are given in Table 10.

Table 9
Remission times (in months) of a random sample of 128 bladder cancer patients.

Table 10
Posterior medians, standard deviations and 95% credibility intervals for ϕ, μ and α.

Figure 2 shows the survival functions fitted by the different probability distributions together with the empirical survival function. Table 11 presents the results of the AIC, AICc and BIC criteria for the different probability distributions, considering the bladder cancer data introduced in Table 9.

Figure 2
Empirical survival function and survival functions fitted by different probability distributions for the bladder cancer data set, together with the hazard function fitted by the GG distribution.

Based on the AIC and AICc criteria, we concluded from the results of Table 11 that the GG distribution has the best fit for the bladder cancer data.

Table 11
Results of the AIC, AICc and BIC criteria for different probability distributions considering the bladder cancer data set introduced in Table 9.

6.2 Data from an industrial experiment

In this section, we considered a lifetime data set related to the cycles to failure for a batch of 60 electrical appliances in a life test, introduced by Lawless [11] and shown in Table 12 (+ indicates the presence of censoring).

From equations (17)-(19), we have ϕ~ = 0.5090, μ~ = 0.2746 and α~ = 1.7725. Using (11) and taking the mean λ = θ~ and variance σθ² = θ~, we elicit the priors ϕ ∼ Gamma(0.5090, 1), μ ∼ Gamma(0.2746, 1) and α ∼ Gamma(1.7725, 1). The posterior summaries obtained from the MCMC methods are given in Table 13.

Table 12
Data set of cycles to failure for a group of 60 electrical appliances in a life test (+ indicates the presence of censoring).

Table 13
Posterior medians, standard deviations and 95% credibility intervals for ϕ, μ and α.

Figure 3 shows the survival functions fitted by the probability distributions together with the empirical survival function. Table 14 presents the results of the AIC, AICc and BIC criteria for the different probability distributions, considering the industrial failure data set introduced in Table 12.

Figure 3
Empirical survival function and survival functions fitted by different probability distributions for the industrial data set, together with the hazard function fitted by the GG distribution.

Table 14
Results of the AIC, AICc and BIC criteria for different probability distributions considering the data of cycles to failure introduced in Table 12.

Based on the AIC and AICc criteria, we concluded from Table 14 that the GG distribution provides the best fit for the industrial data. Additionally, the GG distribution is the only one among the assumed probability distributions that allows a bathtub-shaped hazard. This is an important point for the use of the GG distribution in applications.

7 CONCLUDING REMARKS

The generalized gamma distribution has played an important role as a model for lifetime data, providing great flexibility of fit. In this paper, we showed that, under both the classical and Bayesian approaches, the parameter estimates usually do not exhibit stable behavior, which can lead, in many cases, to different results.

This problem is overcome by proposing some empirical exploratory methods aiming to obtain good initial values to be used in the iterative procedures to find the MLEs of the parameters of the GG distribution. These values were also used to elicit empirical prior distributions (empirical Bayesian methods). These results are of great practical interest since the GG distribution can then be easily used as an appropriate model in different applications.

ACKNOWLEDGEMENTS

The authors are very grateful to the Editor and reviewers for their helpful and useful comments that improved the manuscript.

REFERENCES

  • 1
    AHSANULLAH M, MASWADAH M & ALI MS. 2013. Kernel inference on the generalized gamma distribution based on generalized order statistics. Journal of Statistical Theory and Applications, 12(2): 152-172.
  • 2
    MOALA AF, RAMOS LP & ACHCAR AJ. 2013. Bayesian inference for two-parameter gamma distribution assuming different noninformative priors. Revista Colombiana de Estadística, 36(2): 319-336.
  • 3
    ASHKAR F, BOBÉE B, LEROUX D & MORISETTE D. 1988. The generalized method of moments as applied to the generalized gamma distribution. Stochastic Hydrology and Hydraulics, 2(3): 161-174.
  • 4
    COX C, CHU H, SCHNEIDER MF & MUÑOZ A. 2007. Parametric survival analysis and taxonomy of hazard functions for the generalized gamma distribution. Statistics in Medicine, 26(23): 4352-4374.
  • 5
    DICICCIO T. 1987. Approximate inference for the generalized gamma distribution. Technometrics, 29(1): 33-40.
  • 6
    FOLLAND GB. 2013. Real analysis: modern techniques and their applications. John Wiley & Sons.
  • 7
    GEWEKE J ET AL. 1991. Evaluating the accuracy of sampling-based approaches to the calculation of posterior moments, vol. 196. Federal Reserve Bank of Minneapolis, Research Department Minneapolis, MN, USA.
  • 8
    HAGER HW & BAIN LJ. 1970. Inferential procedures for the generalized gamma distribution. Journal of the American Statistical Association, 65(332): 1601-1609.
  • 9
    HUANG P-H & HWANG T-Y. 2006. On new moment estimation of parameters of the generalized gamma distribution using it’s characterization. Taiwanese Journal of Mathematics, 10: 1083-1093.
  • 10
    KASS RE & WASSERMAN L. 1996. The selection of prior distributions by formal rules. Journal of the American Statistical Association, 91(435): 1343-1370.
  • 11
    LAWLESS JF. 2011. Statistical models and methods for lifetime data, vol. 362. John Wiley & Sons.
  • 12
    LEE ET & WANG J. 2003. Statistical methods for survival data analysis, vol. 476. John Wiley & Sons.
  • 13
    MARTINEZ EZ, ACHCAR JA, DE OLIVEIRA PERES MV & DE QUEIROZ JAM. 2016. A brief note on the simulation of survival data with a desired percentage of right-censored datas. Journal of Data Science, 14(4): 701-712.
  • 14
    PARR VB & WEBSTER J. 1965. A method for discriminating between failure density functions used in reliability predictions. Technometrics, 7(1): 1-10.
  • 15
PASCOA MA, ORTEGA EM & CORDEIRO GM. 2011. The Kumaraswamy generalized gamma distribution with application in survival analysis. Statistical Methodology, 8(5): 411-433.
  • 16
PHAM T & ALMHANA J. 1995. The generalized gamma distribution: its hazard rate and stress-strength model. IEEE Transactions on Reliability, 44(3): 392-397.
  • 17
    PRENTICE RL. 1974. A log gamma model and its maximum likelihood estimation. Biometrika, 61(3): 539-544.
  • 18
    RAMOS P & LOUZADA F. 2016. The generalized weighted lindley distribution: Properties, estimation and applications. Cogent Mathematics, 3(1): 1256022.
  • 19
RAMOS PL. 2014. Aspectos computacionais para inferência na distribuição gama generalizada. Master's dissertation, 165 p.
  • 20
    RAMOS PL, ACHCAR JA & RAMOS E. 2014. Método eficiente para calcular os estimadores de máxima verossimilhanca da distribuição gama generalizada. Rev. Bras. Biom., 32(2): 267-281.
  • 21
RAMOS PL, LOUZADA F & RAMOS E. 2016. An efficient, closed-form MAP estimator for Nakagami-m fading parameter. IEEE Communications Letters, 20(11): 2328-2331.
  • 22
    SHIN JW, CHANG J-H & KIM NS. 2005. Statistical modeling of speech signals based on generalized gamma distribution. IEEE Signal Processing Letters, 12(3): 258-261.
  • 23
    STACY EW. 1962. A generalization of the gamma distribution. The Annals of Mathematical Statistics, 1187-1192.
  • 24
    STACY EW & MIHRAM GA. 1965. Parameter estimation for a generalized gamma distribution. Technometrics, 7(3): 349-358.
  • 25
    WINGO DR. 1987. Computing maximum-likelihood parameter estimates of the generalized gamma distribution by numerical root isolation. IEEE Transactions on Reliability, 36(5): 586-590.

APPENDIX A

Proof. Since \frac{\alpha^{n-\frac{1}{2}-\frac{\alpha}{1+\alpha}}}{\phi\,\Gamma(\phi)^{n}}\,\mu^{n\alpha\phi-1}\prod_{i=1}^{n} t_i^{\alpha\phi-1}\exp\{-\mu^{\alpha}\sum_{i=1}^{n} t_i^{\alpha}\} \ge 0, by Tonelli's theorem [6] we have

d_2 = \int_{0}^{\infty}\!\!\int_{0}^{\infty}\!\!\int_{0}^{\infty} \frac{\alpha^{n-\frac{1}{2}-\frac{\alpha}{1+\alpha}}}{\phi\,\Gamma(\phi)^{n}}\,\mu^{n\alpha\phi-1}\prod_{i=1}^{n} t_i^{\alpha\phi-1}\exp\left\{-\mu^{\alpha}\sum_{i=1}^{n} t_i^{\alpha}\right\} d\mu\, d\phi\, d\alpha
= \int_{0}^{\infty}\!\!\int_{0}^{\infty} \frac{\alpha^{n-\frac{3}{2}-\frac{\alpha}{1+\alpha}}}{\phi\,\Gamma(\phi)^{n}}\prod_{i=1}^{n} t_i^{\alpha\phi-1}\,\frac{\Gamma(n\phi)}{\left(\sum_{i=1}^{n} t_i^{\alpha}\right)^{n\phi}}\, d\phi\, d\alpha = s_1 + s_2 + s_3 + s_4,

where

s_1 = \int_{0}^{1}\!\!\int_{0}^{1} \frac{\alpha^{n-\frac{3}{2}-\frac{\alpha}{1+\alpha}}}{\phi\,\Gamma(\phi)^{n}}\prod_{i=1}^{n} t_i^{\alpha\phi-1}\,\frac{\Gamma(n\phi)}{\left(\sum_{i=1}^{n} t_i^{\alpha}\right)^{n\phi}}\, d\phi\, d\alpha < \int_{0}^{1} c'_1\,\alpha^{n-\frac{3}{2}-\frac{\alpha}{1+\alpha}}\,\frac{\gamma\left(n-1,\, nq(\alpha)\right)}{\left(nq(\alpha)\right)^{n-1}}\, d\alpha < \int_{0}^{1} g'_1\,\alpha^{n-\frac{3}{2}}\, d\alpha < \infty,

s_2 = \int_{1}^{\infty}\!\!\int_{0}^{1} \frac{\alpha^{n-\frac{3}{2}-\frac{\alpha}{1+\alpha}}}{\phi\,\Gamma(\phi)^{n}}\prod_{i=1}^{n} t_i^{\alpha\phi-1}\,\frac{\Gamma(n\phi)}{\left(\sum_{i=1}^{n} t_i^{\alpha}\right)^{n\phi}}\, d\phi\, d\alpha < \int_{1}^{\infty} c'_1\,\alpha^{n-\frac{3}{2}-\frac{\alpha}{1+\alpha}}\,\frac{\gamma\left(n-1,\, nq(\alpha)\right)}{\left(nq(\alpha)\right)^{n-1}}\, d\alpha < \int_{1}^{\infty} g'_2\,\alpha^{-\frac{3}{2}}\, d\alpha < \infty,

s_3 = \int_{0}^{1}\!\!\int_{1}^{\infty} \frac{\alpha^{n-\frac{3}{2}-\frac{\alpha}{1+\alpha}}}{\phi\,\Gamma(\phi)^{n}}\prod_{i=1}^{n} t_i^{\alpha\phi-1}\,\frac{\Gamma(n\phi)}{\left(\sum_{i=1}^{n} t_i^{\alpha}\right)^{n\phi}}\, d\phi\, d\alpha < \int_{0}^{1} c'_2\,\alpha^{n-\frac{3}{2}-\frac{\alpha}{1+\alpha}}\,\frac{\Gamma\left(n-\frac{1}{2},\, np(\alpha)\right)}{\left(np(\alpha)\right)^{n-\frac{1}{2}}}\, d\alpha < \int_{0}^{1} g'_3\,\alpha^{-\frac{1}{2}}\, d\alpha < \infty,

s_4 = \int_{1}^{\infty}\!\!\int_{1}^{\infty} \frac{\alpha^{n-\frac{3}{2}-\frac{\alpha}{1+\alpha}}}{\phi\,\Gamma(\phi)^{n}}\prod_{i=1}^{n} t_i^{\alpha\phi-1}\,\frac{\Gamma(n\phi)}{\left(\sum_{i=1}^{n} t_i^{\alpha}\right)^{n\phi}}\, d\phi\, d\alpha < \int_{1}^{\infty} c'_2\,\alpha^{n-\frac{3}{2}-\frac{\alpha}{1+\alpha}}\,\frac{\Gamma\left(n-\frac{1}{2},\, np(\alpha)\right)}{\left(np(\alpha)\right)^{n-\frac{1}{2}}}\, d\alpha < \int_{1}^{\infty} g'_4\,\alpha^{-\frac{3}{2}}\, d\alpha < \infty,

where c'_1, c'_2, g'_1, g'_2, g'_3 and g'_4 are positive constants such that the above inequalities hold. For more details and the proof of the existence of these constants, see Ramos [19]. Therefore, we have d_2 = s_1 + s_2 + s_3 + s_4 < ∞. □

Publication Dates

  • Publication in this collection
    May-Aug 2017

History

  • Received
    27 Oct 2016
  • Accepted
    14 July 2017