Comparison between the complete Bayesian method and empirical Bayesian method for ARCH models using Brazilian financial time series
Sandra C. Oliveira I,*; Marinho G. Andrade II
I Curso de Administração, Campus Experimental de Tupã - CET, Universidade Estadual Paulista - UNESP, Av. Domingos da Costa Lopes, 780, 17602-496 Tupã, SP, Brazil. E-mail: sandra@tupa.unesp.br
II Departamento de Matemática Aplicada e Estatística - SME, Instituto de Ciências Matemáticas e de Computação - ICMC, Universidade de São Paulo - USP, Av. do Trabalhador Sãocarlense, 400, 13566-590 São Carlos, SP, Brazil. E-mail: marinho@icmc.usp.br
ABSTRACT
In this work we compared the estimates of the parameters of ARCH models obtained by a complete Bayesian method and by an empirical Bayesian method, adopting a non-informative prior distribution and an informative prior distribution, respectively. We also considered a reparameterization of these models in order to map the parameter space onto the real space. This procedure permits choosing normal prior distributions for the transformed parameters. The posterior summaries were obtained using Markov chain Monte Carlo (MCMC) methods. The methodology was evaluated using the Telebras series from the Brazilian financial market. The results show that the two methods are able to fit ARCH models with different numbers of parameters. The empirical Bayesian method provided a more parsimonious model and a better fit to the data than the complete Bayesian method.
Keywords: ARCH models, Bayesian approach, MCMC methods.
1 INTRODUCTION
The dynamics of world financial markets requires increasingly sophisticated, complex and efficient models to describe the trends and characteristics of financial assets as accurately as possible.
The analysis of financial series shows that their conditional variance changes markedly over time. The square root of the conditional variance (the conditional standard deviation) is called volatility. Understanding how volatility changes over time is critical to the financial market, influencing the risk assessment of investments and asset pricing. It determines the degree to which the price of the asset can change in the future: a low value implies small changes (low risk), whereas a high value implies significant variations (high risk) (Enders, 2009).
Let p_t be the price of a given asset at time t, normally a business day. Suppose first that no dividend was paid during the period. The continuously compounded return, or simply log-return, is given by z_t = ln(p_t / p_{t-1}). This definition is commonly used in the literature, and in this paper z_t will be called simply the return. In practice, it is preferable to work with returns, which are scale-free, rather than with prices, because the former are easier to model using time series techniques (Morettin, 2008).
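To make the transformation concrete, the snippet below computes log-returns from a price series; it is a minimal Python sketch (not part of the original paper), and the price values shown are purely illustrative, not Telebras data.

```python
import numpy as np

def log_returns(prices):
    """Compute log-returns z_t = ln(p_t / p_{t-1}) from a price series."""
    p = np.asarray(prices, dtype=float)
    return np.log(p[1:] / p[:-1])

# Illustrative prices only (not actual market data)
prices = [100.0, 101.5, 99.8, 100.2]
print(log_returns(prices))
```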
The characterization of the statistical properties of financial return series is essential for a correct application of models to the data, allowing inferences about the characteristics of this return, especially with regard to the mean and variance, which determine the expected return and volatility forecast for the coming periods. Estimates of these quantities are fundamental to investment decisions not only in assets but also on their derivatives.
There are a large number of nonlinear models for estimating the volatility of financial asset return series. The most widespread in the literature are the autoregressive conditional heteroskedasticity (ARCH) models, proposed by Engle (1982), and their extension, the generalized ARCH (GARCH) models, proposed by Bollerslev (1986). These models feature a nonlinear dependence among the returns, according to the serial dependence of the conditional variance. A comprehensive review of the properties of these models can be found in Degiannakis & Xekalaki (2004) and Bollerslev (2008).
Since the volatility at a given time is considered to depend on past values of the series, obtaining maximum likelihood estimators (MLE) of the parameters of ARCH models requires maximizing a nonlinear function, so the estimates can only be computed numerically. Engle (1982) suggested Newton's method as the iterative procedure for calculating the maximum likelihood estimates. This relaxes the restrictions on the parameters (they must be positive and their sum must be less than one) that ensure covariance stationarity. On the other hand, establishing the asymptotic properties of the restricted MLE involves some difficulties and may lead to local maxima, since properties such as asymptotic normality do not hold under such restrictions (Barndorff-Nielsen & Cox, 1994). In addition, procedures for identification, diagnosis and fitting of the models, as well as for predicting values of econometric series, require results from asymptotic theory. Because the models are far from linear, the asymptotic properties of these estimators can only be verified for very long series and, in general, are more appropriate in the presence of a symmetric error distribution and normally distributed data (Geweke, 1986).
In this context, bootstrap methods can be considered to improve the estimates of the parameters of models in the ARCH family. However, the results of this approach can be severely affected by the MLE calculation, which is repeated many times in the procedure and presents convergence difficulties, partly due to the restrictions imposed on the parameters (Oliveira, 2005). An alternative for estimating these models, which bypasses these difficulties, is to adopt a Bayesian approach.
To the best of our knowledge, Geweke (1989b) was the first author to propose a Bayesian approach for ARCH models, in which a particular reparameterization allowed the use of non-informative prior distributions; posterior summaries of the parameters were obtained using Monte Carlo simulation algorithms. In a subsequent work, Koop (1994) proposed a Bayesian semi-nonparametric approach for ARCH models, taking advantage of Geweke's Bayesian methodology while estimating the model in a semi-nonparametric way. A Bayesian approach for GARCH models specifically, within the class of dynamic models, was proposed by Migon & Mazucheli (1999). Nakatsuma (2000) proposed a Markov chain Monte Carlo (MCMC) method for linear regression models with ARMA-GARCH errors, using normal priors and the Metropolis-Hastings algorithm to sample the model parameters from the posterior distribution. Polasek & Kozumi (2000) and Polasek (2001) developed Bayesian approaches with hierarchical structure for VAR-VARCH (Vector-AR-Vector-ARCH) and PAR-ARCH (Periodic-AR-ARCH) models, respectively, using MCMC. In the context of unobserved components models, Giakoumatos et al. (2005) proposed a Bayesian approach for ARCH models using auxiliary variables (Pitt & Walker, 2005). More recently, Ausin & Galeano (2007) proposed a Bayesian approach for GARCH models with Gaussian mixture errors, and Barreto et al. (2008) compared the Bayesian and maximum likelihood methods using series simulated from ARCH processes of different orders, under conditions of finite and infinite variance. Finally, Andrade & Oliveira (2011) presented a Bayesian approach for ARCH models using normal priors for the parameters and compared credibility intervals with bootstrap intervals. Given the importance of the subject, in this paper we propose a complete Bayesian method and an empirical Bayesian method for estimating the parameters of ARCH processes, and compare the estimates obtained by the two methods.
In the complete approach, we considered the prior distribution based on the non-informative proposal of Geweke (1989b) and used MCMC simulation methods to obtain posterior summaries. In the empirical approach, we partitioned the data set considering a Bayesian analysis for the first part of the data, using the non-informative prior distribution of Geweke (1989b) and MCMC simulation methods. With the information obtained from that first step, we defined an informative prior distribution, which was combined with the second part of the data, resulting in posterior summaries.
We also considered a reparameterization of those models to reduce the rejection rate of the MCMC simulation algorithm and accelerate the convergence process. The proposed methodology was used to model a series of daily prices of Telebras shares traded on the São Paulo Stock Exchange (Bovespa).
2 ARCH(q) MODELS
The regression model proposed by Engle (1982) with zero mean, expressed as a linear combination of exogenous variables, has a structure that can be summarized as:

z_t | Ω_{t-1} ~ P(0, h_t),   (1)

where z_t represents a return series, P(·) is a parametric distribution, usually normal or Student-t, and Ω_{t-1} = {z_{t-1}, z_{t-2}, ...} represents the set of information available at time t-1. Let z_t be a process that satisfies the model

z_t = ε_t √h_t,   (2)

h_t = α_0 + Σ_{j=1}^{q} α_j z_{t-j}²,   (3)

where, in this work, we assume that {ε_t, t > 0} is a sequence of i.i.d. N(0, 1) white noise.
For the model (1)-(3) to be plausible (h_t > 0 for all t), we must have α_0 > 0 and α_j > 0, j = 1, ..., q. Furthermore, the process z_t has finite variance, and is therefore covariance stationary, if and only if all the roots of the polynomial α(l) = 1 - α_1 l - α_2 l² - ... - α_q l^q lie outside the unit circle, where l is the lag operator, such that l^j z_t = z_{t-j}. When this condition is satisfied, it can be shown that the unconditional variance of z_t is given by

Var(z_t) = α_0 / (1 - Σ_{j=1}^{q} α_j).

Therefore, the sufficient condition for this process to be covariance stationary is Σ_{j=1}^{q} α_j < 1.
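As an illustration of the recursion (2)-(3) and of the stationarity condition above, the sketch below simulates an ARCH(q) process with Gaussian errors; the coefficient values, burn-in length and seed are arbitrary choices for the example, not values used in the paper.

```python
import numpy as np

def simulate_arch(alpha, T, burn=500, seed=0):
    """Simulate z_t = sqrt(h_t)*eps_t with h_t = alpha_0 + sum_j alpha_j * z_{t-j}^2,
    eps_t ~ N(0, 1). `alpha` = [alpha_0, alpha_1, ..., alpha_q]."""
    rng = np.random.default_rng(seed)
    q = len(alpha) - 1
    z = np.zeros(T + burn)
    h = np.full(T + burn, alpha[0] / (1.0 - sum(alpha[1:])))  # start at the unconditional variance
    for t in range(q, T + burn):
        h[t] = alpha[0] + sum(alpha[j] * z[t - j] ** 2 for j in range(1, q + 1))
        z[t] = np.sqrt(h[t]) * rng.standard_normal()
    return z[burn:]

# Arbitrary coefficients satisfying alpha_0 > 0, alpha_j > 0 and alpha_1 + alpha_2 < 1
z = simulate_arch([0.1, 0.3, 0.2], T=5000)
print(z.var(), 0.1 / (1.0 - 0.5))  # sample variance vs. theoretical unconditional variance
```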
3 LIKELIHOOD FUNCTION
Let Z = {z_1, z_2, ..., z_T} be an observed trajectory of the return z_t. Assuming that the first q observations Z_0 = {z_1, z_2, ..., z_q} are known, the likelihood function of the observations {z_{q+1}, z_{q+2}, ..., z_T}, conditional on Z_0, is given by

L(α | Z) = ∏_{t=q+1}^{T} (2π h_t)^{-1/2} exp(-z_t² / (2 h_t)),   (4)

where α = [α_0 α_1 ... α_q]' is the vector of parameters of the ARCH(q) model and h_t = α_0 + Σ_{j=1}^{q} α_j z_{t-j}².
Thus, maximum likelihood estimators for the parameter vector α can be obtained by maximizing the likelihood function (4). It is worth noting that the conditional likelihood function can be used instead of the exact likelihood function without great loss of precision in the estimates when the series z_t is long. This approximation is generally adopted because of its great practical advantages in calculating the estimators (Box et al., 1994).
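For reference, the conditional log-likelihood implied by (4) can be coded directly; the sketch below is an illustration, not the paper's implementation, and assumes Gaussian errors and conditioning on the first q observations.

```python
import numpy as np

def arch_cond_loglik(alpha, z):
    """Conditional log-likelihood of an ARCH(q) model for z_{q+1},...,z_T,
    given the first q observations and Gaussian errors."""
    alpha = np.asarray(alpha, dtype=float)
    z = np.asarray(z, dtype=float)
    q = len(alpha) - 1
    # h_t = alpha_0 + sum_{j=1}^{q} alpha_j * z_{t-j}^2
    h = np.array([alpha[0] + np.dot(alpha[1:], z[t - q:t][::-1] ** 2)
                  for t in range(q, len(z))])
    zc = z[q:]
    return -0.5 * np.sum(np.log(2.0 * np.pi * h) + zc ** 2 / h)
```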
4 BAYESIAN INFERENCE FOR ARCH(q) MODELS
The Bayesian approach to inference on the parameters of ARCH(q) models starts from the combination of the likelihood function, L(α | Z), with a prior probability density for the parameters, π(α), by means of Bayes' rule (Gelman et al., 1995):

π(α | Z) ∝ L(α | Z) π(α).   (5)

The expression π(α | Z) is called the posterior probability density of α and describes how the random variable α is distributed after the data have been observed.
In sections 4.1 and 4.2 we propose two different approaches for Bayesian inference for the parameters α of ARCH(q) models.
4.1 Complete Bayesian model (CBM) with non-informative prior
Bayesian analysis requires the specification of a prior density π(α) that reflects the prior knowledge about the distribution of α. When there is little or no prior knowledge about the distribution of a model's parameters, a non-informative prior density must be adopted. In this work we adopted a non-informative prior density for the parameters α of the ARCH(q) model, based on the proposal of Geweke (1989b), defined as:
When α_j = 0, j = 1, ..., q, this reduces to Jeffreys' invariant prior density (Box & Tiao, 1973) for the normal linear regression model. We also consider a reparameterization, which consists of:
The values of a_j and b_j can be chosen based on prior information, for example, from previous studies of the series analyzed. This reparameterization corresponds to choosing a transformation that maps the interval (-∞, +∞) onto the domain (a_j, b_j) and vice versa. In this way, we reduce the rejection rate of the MCMC simulation algorithm, accelerating its convergence. Using the reparameterization defined in (7), we have:
where:
with α_j, j = 0, 1, 2, ..., q, given by (7).
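Equation (7) itself is not reproduced in this extraction, so the exact form of the transformation is not shown here. A common choice with the stated properties (a smooth one-to-one map between the real line and the interval (a_j, b_j), concentrating draws near the midpoint when φ_j is near zero) is a scaled logistic transform; the sketch below is an assumed illustration of such a map, not necessarily the paper's equation (7).

```python
import numpy as np

def phi_to_alpha(phi, a, b):
    """Map an unconstrained phi in (-inf, +inf) to alpha in (a, b) via a scaled
    logistic transform (an assumed form; equation (7) of the paper is not shown)."""
    return a + (b - a) / (1.0 + np.exp(-phi))

def alpha_to_phi(alpha, a, b):
    """Inverse map: alpha in (a, b) back to the real line."""
    return np.log((alpha - a) / (b - alpha))
```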
Combining the expressions (4) and (8), we can write the posterior joint density for φ as:
We note that the posterior density (9) has a form that merely resembles known probability density functions, without corresponding exactly to any of them, which prevents the analytical calculation of the quantities of interest for the parameters α_j, j = 0, 1, ..., q, such as the mean, mode, median and standard deviation (Kotz et al., 2000).
Since the ARCH processes fitted to financial series are generally of high order, the use of numerical integration methods would require a large computational effort and have low accuracy. This problem can be solved by using Markov chain Monte Carlo (MCMC) simulation methods, specifically the Metropolis-Hastings algorithm. Methods based on MCMC estimation are preferable because they can be generalized to more complex models. Other algorithms could be used (e.g., acceptance/rejection and importance sampling), but Metropolis-Hastings is one of those requiring the least computational effort (Gilks et al., 1996).
4.2 Empirical Bayes Model (EBM)
Empirical Bayesian methods are distinguished by using the observed data to estimate the parameters of the prior, thereby avoiding the difficulties of specifying that distribution, whether due to lack of knowledge about the variable of interest or to the subjectivity involved in choosing it. The empirical method can be viewed as an approximation of the complete hierarchical Bayesian analysis (Gelman et al., 1995).
In this work we consider a partition of the return series Z = {z_1, z_2, ..., z_T} such that Z = Z^(1) ∪ Z^(2), where Z^(1) = {z_1, ..., z_{T1}} and Z^(2) = {z_{T1+1}, ..., z_T},
and we propose a Bayesian framework composed of two steps.
In the first step we perform an analysis with the first part of the series of observed data, Z^(1), assuming a non-informative prior density, if possible, π_1(α), to make inferences about the parameter vector α = [α_0 α_1 ... α_{q1}]'. We thus propose the following Bayesian model:
where {z_1, z_2, ..., z_{q1}} are the initial (conditioning) observations and {z_{q1+1}, z_{q1+2}, ..., z_{T1}} are the observations used in the stage-one likelihood. This procedure provides posterior estimates for α through the density π_1(α | Z^(1)).
These estimates are used to construct a density for α* = [α*_0 α*_1 ... α*_{q2}]', π_2(α*), which is used as an informative prior density for the second stage of the analysis (main analysis), where we combine the likelihood function constructed from the second part of the series of observed data, Z^(2), with the prior density π_2(α*), leading to the posterior density given by:
where
In this paper, we partition the data set so that the first half of the series (i.e., T_1 = T/2 observations) is used in the first stage of the Bayesian analysis and the second half is used in the second stage (main analysis). The choice of T_1 requires a careful analysis of the sensitivity of the parameter estimates as a function of the sample size. A minimal training sample is always desirable; however, the computational effort involved in that choice is very large. Further details and discussions about this procedure can be found in Berger & Pericchi (2004).
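Schematically, the two-stage procedure can be organized as below; `run_stage1_mh` is a hypothetical stage-one sampler (for instance, the Metropolis-Hastings sketch given in Section 5), and the quantile choices simply mirror the 95% credibility bounds described above.

```python
import numpy as np

def empirical_bayes_split(z, T1=None):
    """Split the return series into Z^(1) (stage 1, prior elicitation) and
    Z^(2) (stage 2, main analysis); by default T1 = T // 2, as in the paper."""
    z = np.asarray(z, dtype=float)
    T1 = len(z) // 2 if T1 is None else T1
    return z[:T1], z[T1:]

# Stage 1: sample the posterior of alpha on Z^(1) under the non-informative prior,
# then take the 2.5% and 97.5% quantiles of each alpha_j as the bounds (a_j, b_j)
# of the informative prior used in stage 2 (hypothetical sampler shown commented out):
# z1, z2 = empirical_bayes_split(z)
# draws = run_stage1_mh(z1)              # shape: (n_samples, q1 + 1)
# a = np.quantile(draws, 0.025, axis=0)  # lower bounds a_j
# b = np.quantile(draws, 0.975, axis=0)  # upper bounds b_j
```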
For the Bayesian analysis of the first step, we consider the non-informative prior density of Geweke (1989b), defined in (6)-(8), assuming little or no prior knowledge about the distribution of the model's parameters. Assuming that the first q_1 observations are known, the conditional likelihood function is defined with the volatility h_t rewritten as a function of the parameters α_j, j = 0, 1, ..., q_1, transformed into φ_j, j = 0, 1, ..., q_1, by (7), i.e.:

L(φ | Z^(1)) = ∏_{t=q1+1}^{T1} (2π h_t)^{-1/2} exp(-z_t² / (2 h_t)),   (12)

where φ = [φ_0 φ_1 ... φ_{q1}]' and h_t = α_0 + Σ_{j=1}^{q1} α_j z_{t-j}², with each α_j given by (7) as a function of φ_j. Combining expressions (12) and (8), we can write the posterior joint density for φ as:
where:
For the Bayesian analysis of the second stage, we set an informative prior density for α*, π(α*), which is built from the analysis of the first stage, i.e., using the posterior estimates obtained from the stage-one posterior density.
We consider that all α*_j, j = 0, 1, ..., q_2, are independent, with prior densities defined on intervals [a_j, b_j], with a_j > 0 and b_j < 1, such that a_j < α*_j < b_j, since the constraint requires that α_j ∈ (0, 1), j = 1, 2, ..., q_2. Because we want to generate values of α*_j close to the midpoint of each interval, (a_j + b_j)/2, more frequently than values near the limits a_j and b_j, we adopt the reparameterization proposed in equation (7), such that φ*_j is a component of the vector of transformed parameters, and a_j and b_j are selected as the lower and upper bounds of the 95% credibility intervals for α obtained from the stage-one posterior density.
In this case, the conditional likelihood function is defined with the volatility h_t rewritten in terms of the parameters α*_j, j = 0, 1, ..., q_2, transformed in the same way into φ*_j, j = 0, 1, ..., q_2, by (7), i.e.:
where φ* = [φ*_0 φ*_1 ... φ*_{q2}]'. Assuming that the φ*_j, j = 0, 1, ..., q_2, are normally distributed, i.e., φ*_j ~ Normal(0, σ_j²), the prior joint density is given by:

π(φ*) ∝ ∏_{j=0}^{q2} (1/σ_j) exp(-(φ*_j)² / (2σ_j²)).   (15)
Combining expressions (14) and (15), we can write the posterior joint density for φ* as:
Thus, the posterior conditional densities for φ*_j, j = 0, 1, ..., q_2, obtained from expression (16), are given by:
where φ*_{-j} is the vector formed by all the model's parameters except φ*_j, i.e., φ*_{-j} = [φ*_0 ... φ*_{j-1} φ*_{j+1} ... φ*_{q2}]'.
5 ALGORITHMS TO ESTIMATE THE PARAMETERS OF ARCH(q) MODELS
To obtain representative samples from the posterior joint densities (9), (13) and (16), we use MCMC simulation algorithms.
The MCMC simulation method is a form of Monte Carlo integration. The idea is to simulate an irreducible, aperiodic Markov chain whose stationary density is the density of interest, i.e., the posterior density. There are two main methods for generating Markov chains with a specified stationary density: the Metropolis-Hastings algorithm (Chib & Greenberg, 1995), which has been used for many years in statistical physics, and Gibbs sampling, which was introduced into the statistical literature by Gelfand & Smith (1990). In general, the use of these algorithms is necessary when direct (non-iterative) generation from the density to be sampled is too complicated or costly.
The Metropolis-Hastings algorithm
When the posterior conditional densities cannot be identified as having a standard form (normal, gamma, etc.), preventing direct generation from these densities, the Metropolis-Hastings algorithm can be used. This technique requires a transition kernel, i.e., a transition density q(φ, φ'), which need not have equilibrium distribution π but defines a rule for moving the chain. Consider also the acceptance probability p(φ^(r-1), φ') given below. The algorithm follows these steps:
Step 1: Assign an initial value φ = φ^(0) and start the iteration counter at r = 1.
Step 2: Move the chain to a new value φ' generated from the density q(φ^(r-1), φ').
Step 3: Generate u from the uniform density on (0, 1).
Step 4: Accept the generated value φ', setting φ^(r) = φ', if

u ≤ p(φ^(r-1), φ') = min{1, [π(φ') q(φ', φ^(r-1))] / [π(φ^(r-1)) q(φ^(r-1), φ')]}.

Otherwise keep φ^(r) = φ^(r-1).
Step 5: Update r and go to Step 2.
For r sufficiently large, φ^(r) is a draw from the posterior distribution. For the vector case φ = [φ_0 φ_1 ... φ_q]', we have a transition density q(φ, φ') and an acceptance probability p(φ^(r-1), φ'), and we proceed in the same way.
One feature that should be considered when using the Metropolis-Hastings algorithm is that it can generate correlated samples, since whenever a candidate is rejected the previous value is retained in the chain. To avoid biased estimates, it is common to discard the initial part of the chain, eliminating the influence of the initial conditions, and to select values every k steps to reduce the sample autocorrelation. The adequacy of this procedure is reflected in the acceptance rate, and convergence is checked using the criterion of Geweke (1992).
To ensure that the Metropolis-Hastings algorithm converges to the equilibrium density, one must verify the regularity conditions of the posterior distributions, which are satisfied in our case.
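A minimal random-walk Metropolis-Hastings sketch for the transformed parameters φ is given below. The Gaussian random-walk proposal, its step size and the generic `log_post` argument are assumptions made for the illustration; the paper does not specify its transition kernel q(φ, φ'). With a symmetric proposal the kernel terms cancel, so the acceptance ratio reduces to the posterior ratio. The burn-in and thinning at the end follow the settings reported in Section 7.

```python
import numpy as np

def metropolis_hastings(log_post, phi0, n_iter=50_000, step=0.05, seed=0):
    """Random-walk Metropolis-Hastings for the transformed parameters phi.
    `log_post` must return the log of the (unnormalized) posterior density."""
    rng = np.random.default_rng(seed)
    phi = np.asarray(phi0, dtype=float).copy()
    lp = log_post(phi)
    draws, accepted = [], 0
    for _ in range(n_iter):
        cand = phi + step * rng.standard_normal(phi.shape)  # Step 2: propose phi'
        lp_cand = log_post(cand)
        # Steps 3-4: accept with probability min(1, posterior ratio);
        # the symmetric proposal makes the q terms cancel.
        if np.log(rng.uniform()) < lp_cand - lp:
            phi, lp = cand, lp_cand
            accepted += 1
        draws.append(phi.copy())                            # Step 5: record and iterate
    draws = np.array(draws)
    # Discard the first half as burn-in and keep every fifth draw (Section 7 settings)
    return draws[n_iter // 2::5], accepted / n_iter
```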
Assuming a quadratic loss function, the Bayesian estimates for α (Sections 4.1 and 4.2) can be expressed as the expectation of a function of interest g(α) with respect to the posterior density π(α | Z). In general, it is given by:

E[g(α)] = ∫ g(α) π(α | Z) dα.   (18)

The integral in (18) cannot be calculated analytically for the models considered in this work. However, if the expectation E[g(α)] exists, an approximation to (18) can be obtained by Monte Carlo simulation (Geweke, 1989a).
6 MODEL SELECTION CRITERIA
Several criteria can be used to select the order q of ARCH models under a Bayesian framework. In this paper, we use the following: the Bayesian information criterion (BIC), the Akaike information criterion (AIC), the deviance information criterion (DIC) and the predictive ordinate criterion (POC), which are briefly described below.
6.1 Bayesian information criterion (BIC) and Akaike information criterion (AIC)
The Bayesian information criterion (BIC) is a model selection criterion proposed by Schwarz (1978) and modified by Carlin & Louis (2000) for application in the Bayesian context. This criterion is a trade-off between the posterior expected value of the log-likelihood function and the number of model parameters. The best model is the one with the lowest BIC value, given by:

BIC = -2 E(l) + M ln(T),   (19)

where E(l) is the expected value of the log-likelihood function taken with respect to the posterior density, M is the size of the parameter vector and T is the size of the series.
The Akaike information criterion (AIC), proposed by Akaike (1974), was also modified for use in the Bayesian context. The best model is the one with the lowest AIC value, given by:

AIC = -2 E(l) + 2M.   (20)

Lower values of BIC and/or AIC indicate better models, but the BIC penalizes the number of parameters more heavily and tends to select models with fewer parameters.
6.2 Deviance information criterion (DIC)
The deviance information criterion (DIC) is a generalization of the BIC. Proposed by Spiegelhalter et al. (2002), this criterion is defined as:

DIC = D(ᾱ) + 2 M_D,   (21)

where D(ᾱ) = -2 ln L(ᾱ | Z) + C is the deviance evaluated at the posterior mean ᾱ, ln L is the natural logarithm of the likelihood function, C is a constant that cancels out and need not be known when comparing models, and M_D is the effective number of parameters of the model, given by M_D = D̄ - D(ᾱ), where D̄ is the posterior mean deviance, which measures the model's goodness of fit. The model with the lowest DIC value is considered the most suitable to represent the data.
The DIC is commonly used for Bayesian models whose posterior distributions are obtained through MCMC simulation algorithms, including models proposed for the analysis of financial series.
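The sketch below computes the Bayesian versions of AIC, BIC and DIC from MCMC output, following the definitions above; it is an illustration (the constant C is ignored), and the argument names are ours, not the paper's.

```python
import numpy as np

def information_criteria(loglik_draws, loglik_at_mean, M, T):
    """AIC, BIC and DIC from MCMC output.
    loglik_draws:    log-likelihood evaluated at each posterior draw;
    loglik_at_mean:  log-likelihood evaluated at the posterior mean of the parameters;
    M, T:            number of parameters and length of the series."""
    El = np.mean(loglik_draws)            # E(l): posterior mean of the log-likelihood
    aic = -2.0 * El + 2.0 * M
    bic = -2.0 * El + M * np.log(T)
    d_bar = -2.0 * np.mean(loglik_draws)  # posterior mean deviance
    d_hat = -2.0 * loglik_at_mean         # deviance at the posterior mean
    m_d = d_bar - d_hat                   # effective number of parameters
    dic = d_hat + 2.0 * m_d
    return aic, bic, dic
```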
6.3 Predictive ordinate criterion (POC)
In the model selection criterion based on the predictive ordinate density (Carlin & Chib, 1995), we use the distribution of z_{T+m} conditional on the data Z and the parameters α = [α_0 α_1 ... α_q]'. Thus, the predictive density is given by:

p(z_{T+m} | Z) = ∫ p(z_{T+m} | α, Z) π(α | Z) dα,   (22)

where π(α | Z) is the posterior distribution of the parameters α and the integral in (22) is a multiple integral over the parameter space.
Evaluating (22) at the draws α^(r), r = 1, ..., N, obtained by MCMC simulation, and considering h_{T+m} as a function of the parameters α, the Monte Carlo estimate of the predictive density is given by:

p̂(z_{T+m} | Z) = (1/N) Σ_{r=1}^{N} (2π h^(r)_{T+m})^{-1/2} exp(-z²_{T+m} / (2 h^(r)_{T+m})).   (23)

A low value of p̂(z_{T+m} | Z) indicates that the observation z_{T+m} is unlikely under the model in question. Thus, the POC consists of choosing the model l that provides the largest values of the estimated predictive ordinate p̂_l(z_{T+m} | Z).
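A direct Monte Carlo implementation of (23) is sketched below; it assumes Gaussian conditional densities, and the caller is expected to supply, for each posterior draw of α, the corresponding volatility h_{T+m} computed from the past data.

```python
import numpy as np

def predictive_ordinate(z_future, h_future_draws):
    """Monte Carlo estimate of p(z_{T+m} | Z): average the N(0, h_{T+m}) density
    over posterior draws of the volatility h_{T+m}."""
    h = np.asarray(h_future_draws, dtype=float)
    dens = np.exp(-0.5 * z_future ** 2 / h) / np.sqrt(2.0 * np.pi * h)
    return dens.mean()
```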
7 APPLICATION
The analyzed time series consists of daily prices of shares of Telebras (Brazilian telecom holding company controlled by the federal government), traded on the Bovespa in the period from January 2, 1992 to January 5, 1996, for a total of 986 observations.
This was one of the most liquid securities (in terms of number of trades and financial volume) in the Brazilian stock market during the study period, i.e., one of the most representative among those making up the Bovespa index (Ibovespa). The behavior of the Telebras stock has been widely studied in the literature; among the works on this issue, we can mention Costa & Baidya (2001), Oliveira (2005) and Safadi & Andrade (2007).
Let p_t be the closing price of Telebras shares on a trading day, so that the return is given by z_t = ln(p_t / p_{t-1}). The share prices of Telebras and their returns are illustrated in Figure 1.
The graphs in Figure 2 also show the behavior of the series of squared returns of Telebras, z_t². We can see that this series is autocorrelated, a behavior typically associated with ARCH(q) models.
In implementing the Metropolis-Hastings algorithm, for each model (CBM and EBM) we simulated a chain of 50,000 iterations for each parameter, discarded the first 50% of the values to reduce the effect of the initial conditions, and then kept every fifth value, for a final sample of 5,000 observations. The convergence of the algorithm was verified by Geweke's (1992) criterion, at a significance level of 5% for the test under the null hypothesis H0; convergence was accepted when the diagnostic values fell between -1.96 and 1.96. All the computational algorithms were implemented in MATLAB©.
For the CBM, the model fitted to the Telebras series, according to the AIC, DIC and POC (Table 1), was an ARCH(7), described as:
Table 2 shows the estimates and 95% credibility intervals (CI) for α = [α_0 α_1 ... α_q]', using the CBM, obtained in accordance with the procedures proposed for inference on the parameters of ARCH(q) processes, with q = 7. The convergence of the parameters was checked by Geweke's criterion (GC) and was observed for all of them. The last column of this table also shows the acceptance rates (AR) for the parameter values generated by the Metropolis-Hastings algorithm.
As previously mentioned, in the EBM we partitioned the data set such that the first half of the series was used in the first step of the Bayesian analysis, according to the procedure presented in Section 4.2, to obtain posterior estimates of α = [α_0 α_1 ... α_{q1}]'. The model fitted to the Telebras data series in this step, according to the AIC, BIC and POC (Table 3), was an ARCH(6), given by:
Table 4 shows the estimates and 95% credibility intervals (CI) for α= [α0α1...αq1]', obtained in the first stage of the EBM, according to an ARCH(q1) process, q1 = 6.
This information was used to elicit the informative prior distribution π(α*), whose hyperparameters a_j and b_j were set to the lower and upper bounds, respectively, of the 95% credibility intervals presented in Table 4 (in bold), for j = 0, 1, ..., q_2.
Thus, the second half of the data series was used in the second stage (main analysis), combined with π(α*), resulting in the posterior summaries of interest. The model fitted to the Telebras data series in the main analysis, according to the model selection criteria (Table 5), was an ARCH(5), given by:
Table 6 shows the estimates and 95% credibility intervals (CI) for α* = [α0α1 ... αq2]', using EBM, obtained from the main analysis by an ARCH(q2) process, q2 = 5.
The graphs in Figure 3 show the histograms constructed from the sample selected for the parameters α_0, α_1, ..., α_7, i.e., the posterior distributions of the parameters of the ARCH(7) model fitted to the Telebras series under the complete Bayesian model.
The graphs in Figure 4 show the histograms constructed from the sample selected for the parameters α_0, α_1, ..., α_5, i.e., the posterior distributions of the parameters of the ARCH(5) model fitted to the Telebras series in the main analysis of the empirical Bayesian model.
Figure 5 shows the estimated volatility versus the observed volatility (values of z_t²). When the model is appropriate for the data, i.e., it properly estimates the volatility of the series, we expect the graphical representation of this relationship to show the values grouped along a straight line. This behavior can be observed in both cases (complete and empirical models).
As the BIC and similar selection criteria are influenced by the sample size, the criterion based on the predictive ordinate density is the most suitable for the final comparison between the two Bayesian procedures presented.
Thus, although the plots in Figure 5 show that both approaches produced very similar fits, the predictive ordinate plots (and the POC values) in Figure 6 show that the EBM fitted the daily Telebras share price data better, while also being more parsimonious than the CBM. This suggests that the use of prior information can lead to a more representative model for this data series.
Finally, Figure 7 illustrates the volatility estimated by the best model, the EBM. It can be seen that the model detected events that generated significant, often irregular, behavioral changes in the return series during the period studied, among which we can highlight the Mexican crisis of February and March 1995.
8 CONCLUDING REMARKS
In general, it is important to emphasize the feasibility of the Bayesian approach for inference on the parameters of ARCH(q) processes, since it allows the incorporation of the experience of experts in finance, which is usually a relevant issue in the analysis of economic and financial series.
In this context, studies show that empirical Bayesian models produce results similar to those of complete Bayesian models, with the advantage of being easier to apply in a variety of cases.
In this paper, we carried out a detailed empirical Bayesian analysis. Due to the complexity of this analysis, the posterior estimates could only be obtained numerically, so we used MCMC simulation algorithms.
The results were compared with those obtained using the complete Bayesian model and showed that the two methods fitted ARCH processes of different orders. The empirical Bayesian model provided a more parsimonious fit to the Telebras series, i.e., an ARCH process with fewer parameters, than the complete Bayesian model. Moreover, the model selection criterion based on the predictive ordinate density showed that the empirical Bayesian model provided a better fit than the complete Bayesian model.
It is worth noting that the use of empirical prior distributions and of reparametrization contributed to faster convergence of the process of inference of the parameters of ARCH(q) models using MCMC simulation algorithms.
9 ACKNOWLEDGMENTS
We thank the reviewers for their careful reading and valuable comments, and Fundação para o Desenvolvimento da UNESP - FUNDUNESP for financial support (Process no. 00502/07-DF).
Received March 16, 2009
Accepted September 14, 2011
References
- [1] AKAIKE H. 1974. A new look at the statistical model identification. IEEE Transactions on Automatic Control, 19: 716-723.
- [2] ANDRADE MG & OLIVEIRA SC. 2011. A comparative study of Bayesian and Maximum Likelihood approaches for ARCH models with evidence from Brazilian financial series. New Mathematics and Natural Computation, 7: 347-361.
- [3] AUSIN MC & GALEANO P. 2007. Bayesian estimation of the Gaussian mixture GARCH model. Computational Statistics and Data Analysis, 51: 2636-2652.
- [4] BARNDORFF-NIELSEN OE & COX DR. 1994. Inference and Asymptotics. Chapman-Hall, London.
- [5] BARRETO GA, OLIVEIRA SC & ANDRADE MG. 2008. Estimação de Parâmetros de Modelos ARCH(p): Abordagem Bayes-MCMC versus Máxima Verossimilhança. Revista Brasileira de Estatística, 69: 7-24.
- [6] BERGER JO & PERICCHI LR. 2004. Training samples in objective Bayesian model selection. The Annals of Statistics, 32: 841-869.
- [7] BOLLERSLEV T. 2008. Glossary to ARCH (GARCH). CREATES Research Paper 2008-49. Available at SSRN: <http://ssrn.com/abstract=1263250>
- [8] BOLLERSLEV T. 1986. Generalized Autoregressive Conditional Heteroskedasticity. Journal of Econometrics, 31: 307-327.
- [9] BOLLERSLEV T, CHOU RY & KRONER KF. 1992. ARCH Modeling in Finance: A Review of the Theory and Empirical Evidence. Journal of Econometrics, 52: 5-59.
- [10] BOX GE, JENKINS GM & REINSEL G. 1994. Time Series Analysis: Forecasting and Control. Third Edition. Prentice Hall, Englewood Cliffs, NJ.
- [11] BOX GE & TIAO GC. 1973. Bayesian Inference in Statistical Analysis. Addison-Wesley, New York.
- [12] CARLIN B & CHIB S. 1995. Bayesian Model Choice via Markov Chain Monte Carlo Methods. Journal of the Royal Statistical Society, Series B, 57: 473-484.
- [13] CARLIN B & LOUIS T. 2000. Bayes and Empirical Bayes Methods for Data Analysis. Texts in Statistical Science Series, Chapman and Hall, London.
- [14] CHIB S & GREENBERG E. 1995. Understanding the Metropolis-Hastings Algorithm. The American Statistician, 49: 327-335.
- [15] COSTA PHS & BAIDYA TKN. 2001. Propriedades estatísticas das séries de retornos das principais ações brasileiras. Pesquisa Operacional, 21: 61-87.
- [16] DEGIANNAKIS S & XEKALAKI E. 2004. Autoregressive Conditional Heteroscedasticity (ARCH) Models: A review. Quality Technology and Quantitative Management, 1: 271-324.
- [17] ENDERS W. 2009. Applied Econometric Times Series. 3rd edition, John Wiley & Sons, New York.
- [18] ENGLE R. 1982. Autoregressive Conditional Heteroscedasticity with Estimates of the Variance of UK Inflation. Econometrica, 50: 987-1007.
- [19] GELFAND AE & SMITH AFM. 1990. Sampling-Based Approaches to Calculating Marginal Densities. Journal of the American Statistical Association, 85: 398-409.
- [20] GELMAN A, CARLIN J, STERN H & RUBIN D. 1995. Bayesian Data Analysis. London: Chapman & Hall.
- [21] GEWEKE J. 1992. Evaluating the accuracy of sampling-based approaches to the calculation of posterior moments (with discussion). In: BERGER J, BERNARDO J, DAWID A & SMITH A. (editors), Bayesian Statistics, 4: 164-193.
- [22] GEWEKE J. 1989a. Bayesian inference in econometric models using Monte Carlo integration. Econometrica, 57: 1317-1339.
- [23] GEWEKE J. 1989b. Exact Predictive Densities for Linear Models with ARCH Disturbances. Journal of Econometrics, 40: 63-86.
- [24] GEWEKE J. 1986. Exact Inference in the Inequality Constrained Normal Linear Regression Model. Journal of Applied Econometrics. 1: 127-141.
- [25] GIAKOUMATOS S, DELLAPORTAS P & POLITIS D. 2005. Bayesian Analysis of the Unobserved ARCH Model using Auxiliary Variable Sampling. Statistics and Computing, 15: 103-112.
- [26] GILKS WR, RICHARDSON S & SPIEGELHALTER D J. 1996. Markov Chain Monte Carlo in Practice. London: Chapman and Hall.
- [27] KOOP G. 1994. Bayesian Semi-nonparametric ARCH Models. Review of Economics and Statistics, 76: 176-181.
- [28] KOTZ S, BALAKRISHNAN N & JOHNSON NL. 2000. Continuous Multivariate Distributions, Models and Applications, 2nd edition, Wiley Series in Probability and Statistics: Applied Probability and Statistics. Wiley-Interscience, 1: 1-722.
- [29] MIGON HS & MAZUCHELI J. 1999. Modelos GARCH Bayesianos: Métodos Aproximados e Aplicações. The Brazilian Review of Econometrics, 19: 111-138.
- [30] MORETTIN PA. 2008. Econometria Financeira. Edgard Blucher Ltda, São Paulo.
- [31] NAKATSUMA T. 2000. Bayesian Analysis of ARMA-GARCH Models: A Markov Chain Sampling Approach. Journal of Econometrics, 95: 57-69.
- [32] NAKATSUMA T & TSURUMI H. 1996. ARMA-GARCH Models: Bayes Estimation versus MLE, and Bayes Non-Stationarity Test. Technical report, Department of Economics, New Brunswick.
- [33] OLIVEIRA SC. 2005. Modelos Estocásticos com Heterocedasticidade para Séries Temporais em Finanças. Doctoral Thesis, Universidade de São Paulo, Instituto de Ciências Matemáticas e de Computação.
- [34] PITT M & WALKER S. 2005. Constructing stationary time series models using auxiliary variables with applications. Journal of the American Statistical Association, 100: 554-564.
- [35] POLASEK W. 2001. MCMC Methods for Periodic AR-ARCH Models. Technical report, University of Basel.
- [36] POLASEK W & KOZUMI H. 2000. The VAR-VARCH Model: A Bayesian Approach. Technical report, University of Basel.
- [37] SAFADI T & ANDRADE MG. 2007. Abordagem Bayesiana de Modelos de Séries Temporais. Minicurso, 12ª Escola de Séries Temporais e Econometria, Universidade Federal do Rio Grande do Sul.
- [38] SCHWARZ G. 1978. Estimating the Dimension of a Model. The Annals of Statistics, 6: 461-464.
- [39] SPIEGELHALTER D, BEST N, CARLIN B & VAN DER LINDE A. 2002. Bayesian measures of model complexity and fit. Journal of the Royal Statistical Society, 64: 583-639.