International Journal of Forecasting Volume 21, Issue 3, July–September 2005, Pages 577–594

Nowcasting concerns inference on the current realization of random variables using information available up to the recent past. This paper proposes a modelling strategy aimed at making the best use of data for nowcasting based on panel data with severe deficiencies, namely short time series and many missing observations. The basic idea consists of introducing a clustering approach into the usual panel data model specification. A case study in the field of R&D variables illustrates the proposed modelling strategy.

Journal of Applied Econometrics Volume 21, Issue 1, pages 79–109, January/February 2006

This paper surveys the most important developments in multivariate ARCH-type modelling. It reviews the model specifications and inference methods, and identifies likely directions of future research.

Journal of Statistical Computation and Simulation Volume 77, Issue 8, 2007

Econometric Theory Volume 23, Issue 2, April 2007, pp. 251–280

We consider a model for a multivariate time series where the conditional covariance matrix is a function of a finite-dimensional parameter and the innovation distribution is nonparametric. The semiparametric lower bound for the estimation of the Euclidean parameter is characterized, and it is shown that adaptive estimation without reparametrization is not possible. Based on a consistent first-stage estimator (such as quasi maximum likelihood), we propose a semiparametric estimator that estimates the efficient influence function using kernel estimators. We state conditions under which the estimator attains the semiparametric lower bound. For particular models such as the constant conditional correlation model, adaptive estimation of the dynamic part of the model is shown to be possible. To avoid the curse of dimensionality one can, e.g., restrict the multivariate density to the class of spherical distributions, for which we also derive the semiparametric efficiency bound and an estimator that attains this bound. A simulation experiment demonstrates the efficiency gain of the proposed estimator compared with quasi maximum likelihood estimation.

Econometric Reviews Volume 26, Issue 2-4, 2007

Computational Statistics & Data Analysis Volume 51, Issue 7, 1 April 2007, Pages 3551–3566

A new multivariate volatility model where the conditional distribution of a vector time series is given by a mixture of multivariate normal distributions is proposed. Each of these distributions is allowed to have a time-varying covariance matrix. The process can be globally covariance stationary even though some components are not covariance stationary. Some theoretical properties of the model such as the unconditional covariance matrix and autocorrelations of squared returns are derived. The complexity of the model requires a powerful estimation algorithm. A simulation study compares estimation by maximum likelihood with the EM algorithm. Finally, the model is applied to daily US stock returns.
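
A minimal univariate sketch of this mechanism, with made-up parameter values (the paper's model is multivariate): the second component is not covariance stationary on its own, yet the mixture can remain well behaved because that component is drawn only with probability 0.3.

```python
import numpy as np

rng = np.random.default_rng(0)
# Two-component normal mixture; each component has its own GARCH(1,1)
# conditional variance.  All parameter values are illustrative.
w = np.array([0.7, 0.3])           # mixture weights
omega = np.array([0.05, 0.10])
alpha = np.array([0.05, 0.20])
beta = np.array([0.90, 0.85])      # component 2: alpha + beta > 1

T = 1000
h = np.ones(2)                     # component conditional variances
returns = np.empty(T)
for t in range(T):
    k = rng.choice(2, p=w)                          # draw the active component
    returns[t] = np.sqrt(h[k]) * rng.standard_normal()
    h = omega + alpha * returns[t] ** 2 + beta * h  # update both variances
```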

The Econometrics Journal Volume 10, Issue 2, pages 408–425, July 2007

We estimate by Bayesian inference the mixed conditional heteroskedasticity model of Haas *et al.* (2004a, *Journal of Financial Econometrics 2*, 211–50). We construct a Gibbs sampler algorithm to compute posterior and predictive densities. The number of mixture components is selected by the marginal likelihood criterion. We apply the model to the S&P 500 daily returns.
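
A toy data-augmentation Gibbs sampler for a two-component normal mixture with known component variances conveys the flavour of such an algorithm (this is a generic sketch, not the paper's sampler; priors and starting values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
y = np.concatenate([rng.normal(-2, 1, 300), rng.normal(2, 1, 700)])
sig2 = np.array([1.0, 1.0])                # known component variances
mu = np.array([-1.0, 1.0])                 # initial component means
w = np.array([0.5, 0.5])                   # initial mixture weights
draws = []
for it in range(500):
    # 1. Sample latent allocations given the current parameters.
    dens = w * np.exp(-(y[:, None] - mu) ** 2 / (2 * sig2)) / np.sqrt(sig2)
    p = dens / dens.sum(axis=1, keepdims=True)
    z = (rng.random(len(y)) < p[:, 1]).astype(int)
    # 2. Sample the weights from their Beta posterior (flat prior).
    n1 = z.sum()
    w1 = rng.beta(1 + n1, 1 + len(y) - n1)
    w = np.array([1 - w1, w1])
    # 3. Sample each mean from its conjugate normal posterior (flat prior).
    for k in (0, 1):
        nk = (z == k).sum()
        ybar = y[z == k].mean() if nk else 0.0
        mu[k] = rng.normal(ybar, np.sqrt(sig2[k] / max(nk, 1)))
    draws.append((w[1], mu[0], mu[1]))
```

After a burn-in, the draws concentrate around the data-generating values (weight 0.7, means -2 and 2).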

Computational Statistics & Data Analysis Volume 53, Issue 6, 15 April 2009, Pages 2040–2054

The estimation of density functions for positive multivariate data is discussed. The proposed approach is semiparametric. The estimator combines gamma kernels or local linear kernels, also called boundary kernels, for the estimation of the marginal densities with parametric copulas to model the dependence. This semiparametric approach is robust both to the well-known boundary bias problem and to the curse of dimensionality. Mean integrated squared error properties, including the rate of convergence, uniform strong consistency and asymptotic normality, are derived. A simulation study investigates the finite sample performance of the estimator. The proposed estimator performs very well, even for data without boundary bias problems. For bandwidth choice in practice, the univariate least squares cross validation method for the bandwidth of the marginal density estimators is investigated. Applications in the field of finance are provided.
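
A minimal sketch of a gamma-kernel estimator for a positive univariate sample, in the spirit of the marginal estimators above (the bandwidth value and the exponential test sample are illustrative assumptions):

```python
import numpy as np
from scipy.stats import gamma

def gamma_kernel_density(x_grid, data, b):
    """Gamma-kernel density estimate for nonnegative data (Chen, 1999).

    At evaluation point x the kernel is a Gamma(x/b + 1, scale=b) density,
    which puts no mass below zero and hence avoids boundary bias at x = 0;
    b plays the usual bandwidth role."""
    return np.array([gamma.pdf(data, a=x / b + 1.0, scale=b).mean()
                     for x in x_grid])

rng = np.random.default_rng(3)
sample = rng.exponential(size=500)        # true density: exp(-x) on [0, inf)
grid = np.linspace(0.0, 8.0, 161)
fhat = gamma_kernel_density(grid, sample, b=0.2)
```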

Journal of Nonparametric Statistics Volume 20, Issue 7, 2008

Quantitative Finance Volume 9, Issue 6, 2009

Studies in Nonlinear Dynamics & Econometrics Volume 13, Issue 3

To match the stylized facts of high frequency financial time series precisely and parsimoniously, this paper presents a finite mixture of conditional exponential power distributions where each component exhibits asymmetric conditional heteroskedasticity. We provide weak stationarity conditions and unconditional moments up to the fourth order. We apply this new class to Dow Jones index returns. We find that a two-component mixed exponential power distribution dominates mixed normal distributions with more components and more parameters, both in-sample and out-of-sample. In contrast to mixed normal distributions, all the conditional variance processes become stationary. This happens because the mixed exponential power distribution allows for component-specific shape parameters so that it can better capture the tail behaviour. Therefore, the more general new class has attractive features over mixed normal distributions in our application: fewer components are necessary and the conditional variances in the components are stationary processes. Results on NASDAQ index returns are similar.

Computational Statistics & Data Analysis Volume 54, Issue 2, 1 February 2010, Pages 245–261

Journal of Statistical Planning and Inference Volume 140, Issue 1, 1 January 2010, Pages 139–152

We propose a new nonparametric estimator for the density function of multivariate bounded data. As frequently observed in practice, the variables may be partially bounded (e.g. nonnegative) or completely bounded (e.g. in the unit interval). In addition, the variables may have a point mass. We reduce the conditions on the underlying density to a minimum by proposing a nonparametric approach. By using a gamma, a beta, or a local linear kernel (also called boundary kernels) in a product kernel, the suggested estimator becomes simple to implement and robust to the well-known boundary bias problem. We investigate the mean integrated squared error properties, including the rate of convergence, uniform strong consistency and asymptotic normality. We establish consistency of the least squares cross-validation method to select optimal bandwidth parameters. A detailed simulation study investigates the performance of the estimators. Applications using lottery and corporate finance data are provided.

The Econometrics Journal Volume 13, Issue 2, pages 218–244, July 2010

We develop a Markov-switching GARCH model (MS-GARCH) wherein the conditional mean and variance switch in time from one GARCH process to another. The switching is governed by a hidden Markov chain. We provide sufficient conditions for geometric ergodicity and existence of moments of the process. Because of path dependence, maximum likelihood estimation is not feasible. By enlarging the parameter space to include the state variables, Bayesian estimation using a Gibbs sampling algorithm is feasible. We illustrate the model on S&P 500 daily returns.
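
A toy two-regime simulation illustrates the structure (all parameter values are invented for the sketch). Note that the conditional variance depends on the entire regime history, which is the path dependence that rules out direct maximum likelihood:

```python
import numpy as np

rng = np.random.default_rng(1)
P = np.array([[0.98, 0.02],        # hidden Markov chain transition matrix
              [0.05, 0.95]])
omega = np.array([0.02, 0.20])     # low- and high-volatility GARCH(1,1)
alpha = np.array([0.05, 0.15])
beta = np.array([0.92, 0.80])

T = 2000
r = np.empty(T)
states = np.empty(T, dtype=int)
s, h = 0, 1.0
for t in range(T):
    s = rng.choice(2, p=P[s])                   # step the hidden chain
    prev = r[t - 1] ** 2 if t > 0 else 0.0
    h = omega[s] + alpha[s] * prev + beta[s] * h
    r[t] = np.sqrt(h) * rng.standard_normal()
    states[t] = s
```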

Journal of Banking & Finance Volume 35, Issue 9, September 2011, Pages 2267–2281

In this paper we consider option pricing using multivariate models for asset returns. Specifically, we demonstrate the existence of an equivalent martingale measure, we characterize the risk neutral dynamics, and we provide a feasible way for pricing options in this framework. Our application confirms the importance of allowing for dynamic correlation, and it shows that accommodating correlation risk and modeling non-Gaussian features with multivariate mixtures of normals substantially changes the estimated option prices.

Computational Statistics & Data Analysis Volume 56, Issue 11, November 2012, Pages 3415–3429

Change-point models are useful for modeling time series subject to structural breaks. For interpretation and forecasting, it is essential to estimate correctly the number of change points in this class of models. In Bayesian inference, the number of change points is typically chosen by the marginal likelihood criterion, computed by Chib’s method. This method requires one to select a value in the parameter space at which the computation is performed. Bayesian inference for a change-point dynamic regression model and the computation of its marginal likelihood are explained. Motivated by results from three empirical illustrations, a simulation study shows that Chib’s method is robust with respect to the choice of the parameter value used in the computations, among posterior mean, mode and quartiles. However, taking into account the precision of the marginal likelihood estimator, the overall recommendation is to use the posterior mode or median. Furthermore, the performance of the Bayesian information criterion, which is based on maximum likelihood estimates, in selecting the correct model is comparable to that of the marginal likelihood.

Journal of Business & Economic Statistics Volume 30, Issue 2, 2012

Journal of Applied Econometrics, Volume 27, Issue 6, pages 934–955, September/October 2012

This paper addresses the question of the selection of multivariate generalized autoregressive conditional heteroskedastic (GARCH) models in terms of variance matrix forecasting accuracy, with a particular focus on relatively large-scale problems. We consider 10 assets from the New York Stock Exchange and compare 125 models based on 1-, 5- and 20-day-ahead conditional variance forecasts over a period of 10 years using the model confidence set (MCS) and the superior predictive ability (SPA) tests. Model performance is evaluated using four statistical loss functions which account for different types and degrees of asymmetry with respect to over-/under-predictions. When considering the full sample, MCS results are strongly driven by short periods of high market instability during which multivariate GARCH models appear to be inaccurate. Over relatively unstable periods, e.g. the dot-com bubble, the set of superior models is composed of sophisticated specifications such as orthogonal and dynamic conditional correlation (DCC), both with leverage effect in the conditional variances. However, unlike the DCC models, our results show that the orthogonal specifications tend to underestimate the conditional variance. Over calm periods, a simple assumption like constant conditional correlation and symmetry in the conditional variances cannot be rejected. Finally, during the 2007–2008 financial crisis, accounting for non-stationarity in the conditional variance process generates superior forecasts. The SPA test suggests that, independently from the period, the best models do not provide significantly better forecasts than the DCC model of Engle (2002).

Journal of Econometrics Volume 173, Issue 1, March 2013, Pages 1–10

The ranking of multivariate volatility models is inherently problematic because when the unobservable volatility is substituted by a proxy, the ordering implied by a loss function may be biased with respect to the intended one. We point out that the size of the distortion is strictly tied to the level of the accuracy of the volatility proxy. We propose a generalized necessary and sufficient functional form for a class of non-metric distance measures of the Bregman type which ensure consistency of the ordering when the target is observed with noise. An application to three foreign exchange rates is provided.
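
The distortion can be illustrated in the simplest univariate setting (the paper treats the multivariate case). With the squared return as an unbiased but noisy proxy for a unit conditional variance, a quadratic loss ranks a perfect forecast above a deliberately biased one, while a loss measured in standard deviations, which is not of the robust type, prefers the biased forecast:

```python
import numpy as np

rng = np.random.default_rng(4)
N = 200_000
r2 = rng.standard_normal(N) ** 2      # proxy: squared return; true variance is 1
f_true = np.full(N, 1.0)              # forecast A: the true variance
f_shrunk = np.full(N, 0.64)           # forecast B: deliberately shrunk

def mse(proxy, f):                    # robust (quadratic Bregman) loss
    return float(np.mean((proxy - f) ** 2))

def sd_loss(proxy, f):                # non-robust: loss in standard deviations
    return float(np.mean((np.sqrt(proxy) - np.sqrt(f)) ** 2))

m_true, m_shrunk = mse(r2, f_true), mse(r2, f_shrunk)
s_true, s_shrunk = sd_loss(r2, f_true), sd_loss(r2, f_shrunk)
```

Under the quadratic loss the proxy-based ranking agrees with the true one; under the standard-deviation loss the noise in the proxy rewards the downward-biased forecast.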

International Journal of Forecasting Volume 30, Issue 1, January–March 2014, Pages 78–98

We assess the predictive accuracies of a large number of multivariate volatility models in terms of pricing options on the Dow Jones Industrial Average. We measure the value of model sophistication in terms of dollar losses by considering a set of 444 multivariate models that differ in their specification of the conditional variance, conditional correlation, innovation distribution, and estimation approach. All of the models belong to the dynamic conditional correlation class, which is particularly suitable because it allows consistent estimations of the risk neutral dynamics with a manageable amount of computational effort for relatively large scale problems. It turns out that increasing the sophistication in the marginal variance processes (i.e., nonlinearity, asymmetry and component structure) leads to important gains in pricing accuracy. Enriching the model with more complex existing correlation specifications does not improve the performance significantly. Estimating the standard dynamic conditional correlation model by composite likelihood, in order to take into account potential biases in the parameter estimates, generates only slightly better results. To improve on this poor performance of the correlation models, we propose a new model that allows for correlation spillovers without too many parameters. This model performs about 60% better than the existing correlation models we consider. Relaxing the Gaussian innovation assumption in favour of a Laplace innovation improves the pricing more modestly. In addition to investigating the value of model sophistication in terms of dollar losses directly, we also use the model confidence set approach to statistically infer the set of models that delivers the best pricing performances.

Computational Statistics & Data Analysis Volume 76, August 2014, Pages 588–605

Option pricing using mixed normal heteroscedasticity models is considered. It is explained how to perform inference and price options in a Bayesian framework. The approach allows one to easily compute risk neutral predictive price densities which take into account parameter uncertainty. In an application to the S&P 500 index, classical and Bayesian inference is performed on the mixture model using the available return data. Comparing the ML estimates and posterior moments, small differences are found. When pricing a rich sample of options on the index, both methods yield similar pricing errors measured in dollar and implied standard deviation losses, and it turns out that the impact of parameter uncertainty is minor. Therefore, when it comes to option pricing where large amounts of data are available, the choice of the inference method is unimportant. The results are robust to different specifications of the variance dynamics, but show that there might be scope for using Bayesian methods when considerably less data is available for inference.

Journal of Econometrics Volume 178, Part 3, 2014, Pages 508–522

GARCH volatility models with fixed parameters are too restrictive for long time series due to breaks in the volatility process. Flexible alternatives are Markov-switching GARCH and change-point GARCH models. They require estimation by MCMC methods due to the path dependence problem. An unsolved issue is the computation of their marginal likelihood, which is essential for determining the number of regimes or change-points. We solve the problem by using particle MCMC, a technique proposed by Andrieu et al. (2010). We examine the performance of this new method on simulated data, and we illustrate its use on several return series.

Forthcoming in the Journal of Applied Econometrics

This paper compares the forecasting performance of different models which have been proposed for forecasting in the presence of structural breaks. These models differ in their treatment of the break process, the parameters defining the model which applies in each regime and the out-of-sample probability of a break occurring. In an extensive empirical evaluation involving many important macroeconomic time series, we demonstrate the presence of structural breaks and their importance for forecasting in the vast majority of cases. However, we find no single forecasting model consistently works best in the presence of structural breaks. In many cases, the formal modeling of the break process is important in achieving good forecast performance. However, there are also many cases where simple, rolling OLS forecasts perform well.

Forthcoming in the Journal of Econometrics

We consider a new nonparametric estimator of the stationary density of the logarithm of the volatility of the GARCH(1,1) model. This problem is particularly challenging since this density is unknown, even when the model parameters are given. Although the volatility variables are only observed with multiplicative independent innovation errors with unknown density, we manage to construct a nonparametric procedure which estimates the log volatility density consistently. By carefully exploiting the specific GARCH dependence structure of the data, our iterative procedure even attains the striking parametric root-T convergence rate. As a by-product of our main results, we also derive new smoothness properties of the stationary density. Using numerical simulations, we illustrate the performance of our estimator, and we provide an application to financial data.

Handbook of Computational Statistics, pp. 1061–1094

For the last decade we have lived in a digitalized world where many actions in human and economic life are monitored. This produces a continuous stream of new, rich and high-quality data in the form of panels, repeated cross-sections and long time series. These data resources are available to many researchers at low cost. This new era is fascinating for econometricians, who can address many open economic questions. To do so, new models are developed that call for elaborate estimation techniques. Fast personal computers play an integral part in making it possible to deal with this increased complexity.