Bayesian inference for stable processes

Date of Completion

January 1996


The problem of Bayesian inference for univariate and multivariate stable processes is of considerable recent interest in modeling and forecasting both independent and correlated data. Although several different methods exist in the literature for estimating stable law parameters, Bayesian inference for stable processes is virtually unexplored. Except for Buckle's (1995) approach for the univariate stable law case, there is no discussion of the Bayesian approach to inference in univariate or multivariate stable processes. By incorporating prior information and performing posterior analysis, the Bayesian approach facilitates simultaneous estimation of the parameters characterizing the stable law together with the parameters of the correlated process, and enables us to obtain the joint and marginal posterior distributions of all the parameters as well as summary features of these distributions.

We present Bayesian inference for (i) univariate stable laws, (ii) univariate autoregressive moving average (ARMA) models with stable innovations, (iii) multivariate stable laws and (iv) vector autoregressive moving average (VARMA) models with stable innovations. In (i) we approximate the posterior density and moments using normal mixtures (West (1993)) and the Laplace approximation (Tierney (1989)), respectively, as alternatives to Buckle's approach. In (ii) we extend Buckle's approach to time series ARMA models and compare it with the approach using normal mixtures. In (iii) we develop a sampling-based Markov chain Monte Carlo approach for multivariate stable laws, while in (iv) we extend this approach to vector time series models. In each case, we prove the propriety of the posterior distributions under a non-informative prior specification. We also explicitly incorporate, where necessary, the stationarity and invertibility restrictions on the time series model parameters.
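To illustrate the Laplace approximation used in (i) for posterior moments, the following is a minimal scalar sketch, not the dissertation's implementation: the log-integrand is maximized by Newton iteration with finite-difference derivatives, and the integral is approximated by a Gaussian fitted at the mode. The function name `laplace_log_integral` and the Gamma-kernel test case are illustrative choices, assumed here for checking the approximation against a known answer.

```python
import math

def laplace_log_integral(logpost, theta0, h=1e-4):
    """Laplace approximation to log of the integral of exp(logpost(theta))
    over a scalar theta: find the mode theta_hat, then use
    log-integral ~ logpost(theta_hat) + 0.5 * log(2*pi / (-l''(theta_hat)))."""
    theta = theta0
    for _ in range(100):
        # Central finite-difference gradient and Hessian of the log-integrand.
        g = (logpost(theta + h) - logpost(theta - h)) / (2 * h)
        H = (logpost(theta + h) - 2 * logpost(theta) + logpost(theta - h)) / h**2
        step = g / H
        theta -= step  # Newton step toward the mode
        if abs(step) < 1e-8:
            break
    H = (logpost(theta + h) - 2 * logpost(theta) + logpost(theta - h)) / h**2
    return logpost(theta) + 0.5 * math.log(2 * math.pi / -H)

# Sanity check on a Gamma(a, 1) kernel, whose integral is Gamma(a) exactly;
# here the Laplace approximation reduces to Stirling's formula.
a = 20.0
approx = laplace_log_integral(lambda t: (a - 1) * math.log(t) - t, theta0=a)
exact = math.lgamma(a)
```

For a = 20 the approximation agrees with log Gamma(a) to within about 0.005, which is the usual Stirling-type accuracy; in the posterior-moment setting one applies the same device to the numerator and denominator integrals of an expectation (the Tierney-Kadane ratio), so leading-order errors largely cancel.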
To enable sampling from complete conditional distributions that have non-standard forms, we use a Metropolis-Hastings algorithm which, after convergence, yields samples from the required joint posterior. We illustrate our approach through two real-data examples for the univariate cases and simulated data for the multivariate cases.
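The Metropolis-Hastings step described above can be sketched as follows. This is a generic random-walk sampler on a toy standard-normal target, assumed here purely for illustration; the dissertation's samplers target the non-standard complete conditionals of the stable-law and time series parameters.

```python
import math
import random

def metropolis_hastings(log_target, x0, n_iter, scale=1.0, seed=0):
    """Random-walk Metropolis-Hastings: propose x' = x + N(0, scale^2)
    and accept with probability min(1, target(x') / target(x)).
    Since the proposal is symmetric, its density cancels in the ratio."""
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(n_iter):
        x_prop = x + rng.gauss(0.0, scale)
        # Accept/reject on the log scale; unnormalized log densities suffice.
        if math.log(rng.random()) < log_target(x_prop) - log_target(x):
            x = x_prop
        samples.append(x)
    return samples

# Toy target: standard normal log-density, up to an additive constant.
draws = metropolis_hastings(lambda x: -0.5 * x * x, x0=0.0,
                            n_iter=20000, scale=2.4)
burned = draws[5000:]  # discard burn-in before convergence
mean = sum(burned) / len(burned)
var = sum((d - mean) ** 2 for d in burned) / len(burned)
```

After burn-in, the retained draws behave as (correlated) samples from the target, so their mean and variance should be close to 0 and 1 here; in the applications above, the same mechanism run over each parameter's complete conditional yields draws from the joint posterior.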