Clustered panel data models: an efficient approach for nowcasting from poor data

journal paper
Michel Mouchart, Jeroen V.K. Rombouts
International Journal of Forecasting Volume 21, Issue 3, July–September 2005, Pages 577–594

Nowcasting concerns the inference on the current realization of random variables using information available until a recent past. This paper proposes a modelling strategy aimed at the best use of data for nowcasting based on panel data with severe deficiencies, namely, short time series and many missing data. The basic idea consists of introducing a clustering approach into the usual panel data model specification. A case study in the field of R&D variables illustrates the proposed modelling strategy.

Multivariate GARCH models: a survey

journal paper
Luc Bauwens, Sébastien Laurent, Jeroen V. K. Rombouts
Journal of Applied Econometrics Volume 21, Issue 1, pages 79–109, January/February 2006

This paper surveys the most important developments in multivariate ARCH-type modelling. It reviews the model specifications and inference methods, and identifies likely directions of future research.

Estimation of temporally aggregated multivariate GARCH models

journal paper
Christian M. Hafner, Jeroen V. K. Rombouts
Journal of Statistical Computation and Simulation Volume 77, Issue 8, 2007

This paper investigates the performance of quasi maximum likelihood (QML) and non-linear least squares (NLS) estimation applied to temporally aggregated GARCH models. As these are known to be only weak GARCH, the conditional variance of the aggregated process is in general not known. Thus, one major condition, often used in proving the consistency of QML, the correct specification of the first two moments, is absent. Indeed, our results suggest that QML is not consistent, with a substantial bias if both the initial degree of persistence and the aggregation level are high. In other cases, QML might be taken as an approximation with only a small bias. On the basis of the results for univariate GARCH models, NLS is likely to be consistent, although inefficient, for weak GARCH models. Our simulation study reveals that NLS does not reduce the bias of QML in considerably large samples. As the variation of NLS estimates is much higher than that of QML, one would obviously prefer QML in most practical situations. An empirical example illustrates some of the results.
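The temporal aggregation at issue can be illustrated with a short simulation. The sketch below (pure Python, with hypothetical parameter values) simulates a strong GARCH(1,1) path and aggregates the returns over non-overlapping blocks; the aggregated series is only weak GARCH, which is exactly why its conditional variance is not available in closed form:

```python
import math
import random

def simulate_garch(omega, alpha, beta, n, seed=0):
    """Simulate n observations from a strong GARCH(1,1) with Gaussian innovations."""
    rng = random.Random(seed)
    h = omega / (1.0 - alpha - beta)  # start at the unconditional variance
    returns = []
    for _ in range(n):
        r = math.sqrt(h) * rng.gauss(0.0, 1.0)
        returns.append(r)
        h = omega + alpha * r * r + beta * h  # GARCH(1,1) variance recursion
    return returns

def aggregate(returns, m):
    """Temporally aggregate returns over non-overlapping blocks of length m."""
    return [sum(returns[i:i + m]) for i in range(0, len(returns) - m + 1, m)]

# illustrative: high-frequency returns with high persistence, aggregated to a lower frequency
daily = simulate_garch(omega=0.05, alpha=0.08, beta=0.90, n=5000)
weekly = aggregate(daily, 5)
```

Fitting a standard GARCH(1,1) by QML to `weekly` then amounts to treating a weak GARCH process as if it were strong, which is the misspecification the paper studies.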

Semiparametric Multivariate Volatility Models

journal paper
Christian M. Hafner and Jeroen V.K. Rombouts
Econometric Theory Volume 23, Issue 2, April 2007, Pages 251–280

We consider a model for a multivariate time series where the conditional covariance matrix is a function of a finite-dimensional parameter and the innovation distribution is nonparametric. The semiparametric lower bound for the estimation of the euclidean parameter is characterized, and it is shown that adaptive estimation without reparametrization is not possible. Based on a consistent first-stage estimator (such as quasi maximum likelihood), we propose a semiparametric estimator that estimates the efficient influence function using kernel estimators. We state conditions under which the estimator attains the semiparametric lower bound. For particular models such as the constant conditional correlation model, adaptive estimation of the dynamic part of the model is shown to be possible. To avoid the curse of dimensionality one can, e.g., restrict the multivariate density to the class of spherical distributions, for which we also derive the semiparametric efficiency bound and an estimator that attains this bound. A simulation experiment demonstrates the efficiency gain of the proposed estimator compared with quasi maximum likelihood estimation.

Bayesian Clustering of Many Garch Models

journal paper
L. Bauwens & J. V. K. Rombouts
Econometric Reviews Volume 26, Issue 2-4, 2007

We consider the estimation of a large number of GARCH models, of the order of several hundreds. Our interest lies in the identification of common structures in the volatility dynamics of the univariate time series. To do so, we classify the series in an unknown number of clusters. Within a cluster, the series share the same model and the same parameters. Each cluster contains therefore similar series. We do not know a priori which series belongs to which cluster. The model is a finite mixture of distributions, where the component weights are unknown parameters and each component distribution has its own conditional mean and variance. Inference is done by the Bayesian approach, using data augmentation techniques. Simulations and an illustration using data on U.S. stocks are provided.

Mixed Normal Multivariate Conditional Heteroskedasticity

journal paper
L. Bauwens, C.M. Hafner, J.V.K. Rombouts
Computational Statistics & Data Analysis Volume 51, Issue 7, 1 April 2007, Pages 3551–3566

A new multivariate volatility model where the conditional distribution of a vector time series is given by a mixture of multivariate normal distributions is proposed. Each of these distributions is allowed to have a time-varying covariance matrix. The process can be globally covariance stationary even though some components are not covariance stationary. Some theoretical properties of the model such as the unconditional covariance matrix and autocorrelations of squared returns are derived. The complexity of the model requires a powerful estimation algorithm. A simulation study compares estimation by maximum likelihood with the EM algorithm. Finally, the model is applied to daily US stock returns.

Bayesian Inference for the Mixed Conditional Heteroskedasticity Model

journal paper
L. Bauwens, J.V.K. Rombouts
The Econometrics Journal Volume 10, Issue 2, pages 408–425, July 2007

We estimate by Bayesian inference the mixed conditional heteroskedasticity model of Haas et al. (2004a, Journal of Financial Econometrics 2, 211–250). We construct a Gibbs sampler algorithm to compute posterior and predictive densities. The number of mixture components is selected by the marginal likelihood criterion. We apply the model to the S&P 500 daily returns.

Semiparametric Multivariate Density Estimation for Positive Data Using Copulas

journal paper
T. Bouezmarni, J.V.K. Rombouts
Computational Statistics & Data Analysis Volume 53, Issue 6, 15 April 2009, Pages 2040–2054

The estimation of density functions for positive multivariate data is discussed. The proposed approach is semiparametric. The estimator combines gamma kernels or local linear kernels, also called boundary kernels, for the estimation of the marginal densities with parametric copulas to model the dependence. This semiparametric approach is robust both to the well-known boundary bias problem and the curse of dimensionality problem. Mean integrated squared error properties, including the rate of convergence, the uniform strong consistency and the asymptotic normality are derived. A simulation study investigates the finite sample performance of the estimator. The proposed estimator performs very well, also for data without boundary bias problems. For bandwidths choice in practice, the univariate least squares cross validation method for the bandwidth of the marginal density estimators is investigated. Applications in the field of finance are provided.

Density and hazard rate estimation for censored and α-mixing data using gamma kernels

journal paper
Taoufik Bouezmarni & Jeroen V.K. Rombouts
Journal of Nonparametric Statistics Volume 20, Issue 7, 2008

In this paper, we consider the non-parametric estimation for a density and hazard rate function for right censored α-mixing survival time data using kernel smoothing techniques. As survival times are positive with potentially high concentration at zero, one has to take into account the bias problems when the functions are estimated in the boundary region. In this paper, gamma kernel estimators of the density and the hazard rate function are proposed. The estimators use adaptive weights depending on the point in which we estimate the function, and they are robust to the boundary bias problem. For both estimators, the mean-squared error properties, including the rate of convergence, the almost sure consistency, and the asymptotic normality, are investigated. The results of a simulation study demonstrate the performance of the proposed estimators.
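The basic gamma kernel idea can be sketched in a few lines of Python (following Chen-style gamma kernels; the bandwidth and the exponential test data are illustrative, and the censoring adjustments and adaptive-weight refinements of the paper are omitted):

```python
import math
import random

def gamma_pdf(t, shape, scale):
    """Density of the Gamma(shape, scale) distribution at t > 0."""
    return math.exp((shape - 1.0) * math.log(t) - t / scale
                    - math.lgamma(shape) - shape * math.log(scale))

def gamma_kernel_density(x, data, b):
    """Gamma kernel density estimate at x >= 0 with bandwidth b: the average of
    Gamma(x/b + 1, b) densities evaluated at the (strictly positive) observations.
    The kernel's shape adapts near zero, which avoids boundary bias."""
    shape = x / b + 1.0
    return sum(gamma_pdf(xi, shape, b) for xi in data) / len(data)

# toy check against a known positive density: Exponential(1) survival times
rng = random.Random(1)
sample = [rng.expovariate(1.0) for _ in range(2000)]
est = gamma_kernel_density(0.5, sample, b=0.1)  # true density is exp(-0.5)
```

Because the kernel is a gamma density, the estimate puts no mass on the negative half-line by construction, unlike a symmetric kernel shifted to the boundary.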

Evaluating Portfolio Value-at-Risk using Semi-parametric GARCH Models

journal paper
Jeroen V.K. Rombouts & Marno Verbeek
Quantitative Finance Volume 9, Issue 6, 2009

In this paper we examine the usefulness of multivariate semi-parametric GARCH models for evaluating the Value-at-Risk (VaR) of a portfolio with arbitrary weights. We specify and estimate several alternative multivariate GARCH models for daily returns on the S&P 500 and Nasdaq indexes. Examining the within-sample VaRs of a set of given portfolios shows that the semi-parametric model performs uniformly well, while parametric models in several cases have unacceptable failure rates. Interestingly, distributional assumptions appear to have a much larger impact on the performance of the VaR estimates than the particular parametric specification chosen for the GARCH equations.
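The failure rate used in such VaR backtests is simply the share of days on which the realized loss exceeds the forecast; a self-contained sketch (the Gaussian toy data and the 5% level are illustrative, not from the paper):

```python
import random

def var_failure_rate(returns, var_forecasts):
    """Fraction of days on which the realized return falls below the VaR forecast
    (VaR expressed here as a return-scale quantile, i.e. a negative number)."""
    hits = sum(1 for r, v in zip(returns, var_forecasts) if r < v)
    return hits / len(returns)

# toy check: with the VaR set at the empirical 5% quantile of the sample,
# the in-sample failure rate should be close to the 5% nominal level
rng = random.Random(42)
rets = [rng.gauss(0.0, 1.0) for _ in range(1000)]
q = sorted(rets)[int(0.05 * len(rets))]  # empirical 5% quantile
rate = var_failure_rate(rets, [q] * len(rets))
```

A model with an "unacceptable failure rate", in the paper's sense, is one whose observed rate differs significantly from the nominal level.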

Mixed Exponential Power Asymmetric Conditional Heteroskedasticity

journal paper
Jeroen V. K. Rombouts, Mohammed Bouaddi
Studies in Nonlinear Dynamics & Econometrics Volume 13, Issue 3, 2009

To match the stylized facts of high frequency financial time series precisely and parsimoniously, this paper presents a finite mixture of conditional exponential power distributions where each component exhibits asymmetric conditional heteroskedasticity. We provide weak stationarity conditions and unconditional moments to the fourth order. We apply this new class to Dow Jones index returns. We find that a two-component mixed exponential power distribution dominates mixed normal distributions with more components, and more parameters, both in-sample and out-of-sample. In contrast to mixed normal distributions, all the conditional variance processes become stationary. This happens because the mixed exponential power distribution allows for component-specific shape parameters so that it can better capture the tail behaviour. Therefore, the more general new class has attractive features over mixed normal distributions in our application: fewer components are necessary and the conditional variances in the components are stationary processes. Results on NASDAQ index returns are similar.

Nonparametric density estimation for positive time series

journal paper
T. Bouezmarni, J.V.K. Rombouts
Computational Statistics & Data Analysis Volume 54, Issue 2, 1 February 2010, Pages 245–261


Asymptotic properties of the Bernstein density copula estimator for α-mixing data

journal paper
Taoufik Bouezmarni, Jeroen V.K. Rombouts, Abderrahim Taamouti
Journal of Multivariate Analysis Volume 101, Issue 1, January 2010, Pages 1–10


Nonparametric Density Estimation for Multivariate Bounded Data

journal paper
Taoufik Bouezmarni, Jeroen V.K. Rombouts
Journal of Statistical Planning and Inference Volume 140, Issue 1, 1 January 2010, Pages 139–152

We propose a new nonparametric estimator for the density function of multivariate bounded data. As frequently observed in practice, the variables may be partially bounded (e.g. nonnegative) or completely bounded (e.g. in the unit interval). In addition, the variables may have a point mass. We reduce the conditions on the underlying density to a minimum by proposing a nonparametric approach. By using a gamma, a beta, or a local linear kernel (also called boundary kernels), in a product kernel, the suggested estimator becomes simple in implementation and robust to the well known boundary bias problem. We investigate the mean integrated squared error properties, including the rate of convergence, uniform strong consistency and asymptotic normality. We establish consistency of the least squares cross-validation method to select optimal bandwidth parameters. A detailed simulation study investigates the performance of the estimators. Applications using lottery and corporate finance data are provided.

Theory and Inference for a Markov Switching GARCH Model

journal paper
Luc Bauwens, Arie Preminger and Jeroen V. K. Rombouts
The Econometrics Journal Volume 13, Issue 2, pages 218–244, July 2010

We develop a Markov-switching GARCH model (MS-GARCH) wherein the conditional mean and variance switch in time from one GARCH process to another. The switching is governed by a hidden Markov chain. We provide sufficient conditions for geometric ergodicity and existence of moments of the process. Because of path dependence, maximum likelihood estimation is not feasible. By enlarging the parameter space to include the state variables, Bayesian estimation using a Gibbs sampling algorithm is feasible. We illustrate the model on S&P500 daily returns.
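The switching mechanism can be illustrated with a minimal simulation (the two-state setup and all parameter values below are hypothetical, not those of the paper):

```python
import math
import random

def simulate_ms_garch(params, stay_prob, n, seed=0):
    """Simulate a two-state MS-GARCH path: a hidden Markov chain selects which
    set of GARCH(1,1) parameters (omega, alpha, beta) drives the variance."""
    rng = random.Random(seed)
    state = 0
    omega, alpha, beta = params[state]
    h = omega / (1.0 - alpha - beta)  # start at state 0's unconditional variance
    r = 0.0
    returns, states = [], []
    for _ in range(n):
        if rng.random() > stay_prob[state]:  # hidden Markov chain transition
            state = 1 - state
        omega, alpha, beta = params[state]
        h = omega + alpha * r * r + beta * h  # regime-specific variance recursion
        r = math.sqrt(h) * rng.gauss(0.0, 1.0)
        returns.append(r)
        states.append(state)
    return returns, states

rets, sts = simulate_ms_garch(
    params=[(0.05, 0.05, 0.90), (0.30, 0.10, 0.85)],  # calm vs. turbulent regime
    stay_prob=[0.98, 0.95],                            # probability of staying put
    n=2000,
)
```

Because today's variance depends on the entire past regime path, evaluating the likelihood requires summing over all state histories; augmenting the parameter space with the states, as in the paper's Gibbs sampler, sidesteps this path-dependence problem.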

Multivariate Option Pricing with Time Varying Volatility and Correlations

journal paper
Jeroen V.K. Rombouts, Lars Stentoft
Journal of Banking & Finance Volume 35, Issue 9, September 2011, Pages 2267–2281

In this paper we consider option pricing using multivariate models for asset returns. Specifically, we demonstrate the existence of an equivalent martingale measure, we characterize the risk neutral dynamics, and we provide a feasible way for pricing options in this framework. Our application confirms the importance of allowing for dynamic correlation, and it shows that accommodating correlation risk and modeling non-Gaussian features with multivariate mixtures of normals substantially changes the estimated option prices.

On Marginal Likelihood Computation in Change-Point Models

journal paper
Luc Bauwens, Jeroen V.K. Rombouts
Computational Statistics & Data Analysis Volume 56, Issue 11, November 2012, Pages 3415–3429

Change-point models are useful for modeling time series subject to structural breaks. For interpretation and forecasting, it is essential to estimate correctly the number of change points in this class of models. In Bayesian inference, the number of change points is typically chosen by the marginal likelihood criterion, computed by Chib’s method. This method requires one to select a value in the parameter space at which the computation is performed. Bayesian inference for a change-point dynamic regression model and the computation of its marginal likelihood are explained. Motivated by results from three empirical illustrations, a simulation study shows that Chib’s method is robust with respect to the choice of the parameter value used in the computations, among posterior mean, mode and quartiles. However, taking into account the precision of the marginal likelihood estimator, the overall recommendation is to use the posterior mode or median. Furthermore, the performance of the Bayesian information criterion, which is based on maximum likelihood estimates, in selecting the correct model is comparable to that of the marginal likelihood.

A Nonparametric Copula Based Test for Conditional Independence with Applications to Granger Causality

journal paper
Taoufik Bouezmarni, Jeroen V.K. Rombouts & Abderrahim Taamouti
Journal of Business & Economic Statistics Volume 30, Issue 2, 2012

This article proposes a new nonparametric test for conditional independence that can directly be applied to test for Granger causality. Based on the comparison of copula densities, the test is easy to implement because it does not involve a weighting function in the test statistic, and it can be applied in general settings since there is no restriction on the dimension of the time series data. In fact, to apply the test, only a bandwidth is needed for the nonparametric copula. We prove that the test statistic is asymptotically pivotal under the null hypothesis, establish local power properties, and motivate the validity of the bootstrap technique that we use in finite sample settings. A simulation study illustrates the size and power properties of the test. We illustrate the practical relevance of our test by considering two empirical applications where we examine the Granger noncausality between financial variables. In the first application, and contrary to the general findings in the literature, we provide evidence on two alternative mechanisms of nonlinear interaction between returns and volatilities: nonlinear leverage and volatility feedback effects. This can help better understand the well known asymmetric volatility phenomenon. In the second application, we investigate the Granger causality between stock index returns and trading volume. We find convincing evidence of linear and nonlinear feedback effects from stock returns to volume, but only weak evidence of a nonlinear feedback effect from volume to stock returns.

On the Forecasting Accuracy of Multivariate GARCH Models

journal paper
Sébastien Laurent, Jeroen V. K. Rombouts and Francesco Violante
Journal of Applied Econometrics, Volume 27, Issue 6, pages 934–955, September/October 2012

This paper addresses the question of the selection of multivariate generalized autoregressive conditional heteroskedastic (GARCH) models in terms of variance matrix forecasting accuracy, with a particular focus on relatively large-scale problems. We consider 10 assets from the New York Stock Exchange and compare 125 models based on 1-, 5- and 20-day-ahead conditional variance forecasts over a period of 10 years using the model confidence set (MCS) and the superior predictive ability (SPA) tests. Model performance is evaluated using four statistical loss functions which account for different types and degrees of asymmetry with respect to over-/under-predictions. When considering the full sample, MCS results are strongly driven by short periods of high market instability during which multivariate GARCH models appear to be inaccurate. Over relatively unstable periods, i.e. the dot-com bubble, the set of superior models is composed of sophisticated specifications such as orthogonal and dynamic conditional correlation (DCC), both with leverage effect in the conditional variances. However, unlike the DCC models, our results show that the orthogonal specifications tend to underestimate the conditional variance. Over calm periods, a simple assumption like constant conditional correlation and symmetry in the conditional variances cannot be rejected. Finally, during the 2007–2008 financial crisis, accounting for non-stationarity in the conditional variance process generates superior forecasts. The SPA test suggests that, independently from the period, the best models do not provide significantly better forecasts than the DCC model of Engle (2002).

On Loss Functions and Ranking Forecasting Performances of Multivariate Volatility Models

journal paper
Sébastien Laurent, Jeroen V.K. Rombouts, Francesco Violante
Journal of Econometrics Volume 173, Issue 1, March 2013, Pages 1–10

The ranking of multivariate volatility models is inherently problematic because when the unobservable volatility is substituted by a proxy, the ordering implied by a loss function may be biased with respect to the intended one. We point out that the size of the distortion is strictly tied to the level of the accuracy of the volatility proxy. We propose a generalized necessary and sufficient functional form for a class of non-metric distance measures of the Bregman type which ensure consistency of the ordering when the target is observed with noise. An application to three foreign exchange rates is provided.
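In the univariate case, the consistency property at stake can be checked numerically: under a loss of the Bregman type, a noisy but unbiased proxy (such as the squared return) still ranks the correct variance forecast ahead of a biased one. A toy Python check, with QLIKE and MSE as two standard examples of such losses (all numbers are illustrative):

```python
import math
import random

def mse_loss(proxy, forecast):
    """Squared error between a variance proxy and a variance forecast."""
    return (proxy - forecast) ** 2

def qlike_loss(proxy, forecast):
    """QLIKE loss, a non-metric distance of the Bregman type that is robust
    to noise in the volatility proxy."""
    ratio = proxy / forecast
    return ratio - math.log(ratio) - 1.0

# squared returns: an unbiased but very noisy proxy of the true variance
rng = random.Random(7)
true_var = 2.0
proxies = [true_var * rng.gauss(0.0, 1.0) ** 2 for _ in range(50000)]

def avg_loss(loss, forecast):
    return sum(loss(p, forecast) for p in proxies) / len(proxies)

mse_correct = avg_loss(mse_loss, true_var)
mse_biased = avg_loss(mse_loss, 1.5 * true_var)
qlike_correct = avg_loss(qlike_loss, true_var)
qlike_biased = avg_loss(qlike_loss, 1.5 * true_var)
```

With a non-robust loss, by contrast, the proxy noise can flip the ordering; the paper characterizes exactly which functional forms avoid this distortion.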

The Value of Multivariate Model Sophistication: An Application to pricing Dow Jones Industrial Average Options

journal paper
Jeroen Rombouts, Lars Stentoft, Francesco Violante
International Journal of Forecasting Volume 30, Issue 1, January–March 2014, Pages 78–98

We assess the predictive accuracies of a large number of multivariate volatility models in terms of pricing options on the Dow Jones Industrial Average. We measure the value of model sophistication in terms of dollar losses by considering a set of 444 multivariate models that differ in their specification of the conditional variance, conditional correlation, innovation distribution, and estimation approach. All of the models belong to the dynamic conditional correlation class, which is particularly suitable because it allows consistent estimations of the risk neutral dynamics with a manageable amount of computational effort for relatively large scale problems. It turns out that increasing the sophistication in the marginal variance processes (i.e., nonlinearity, asymmetry and component structure) leads to important gains in pricing accuracy. Enriching the model with more complex existing correlation specifications does not improve the performance significantly. Estimating the standard dynamic conditional correlation model by composite likelihood, in order to take into account potential biases in the parameter estimates, generates only slightly better results. To enhance this poor performance of correlation models, we propose a new model that allows for correlation spillovers without too many parameters. This model performs about 60% better than the existing correlation models we consider. Relaxing the Gaussian innovation assumption in favour of a Laplace innovation assumption improves the pricing in a more minor way. In addition to investigating the value of model sophistication in terms of dollar losses directly, we also use the model confidence set approach to statistically infer the set of models that delivers the best pricing performances.

Bayesian Option Pricing Using Mixed Normal Heteroskedasticity Models

journal paper
Jeroen V.K. Rombouts, Lars Stentoft
Computational Statistics & Data Analysis Volume 76, August 2014, Pages 588–605

Option pricing using mixed normal heteroscedasticity models is considered. It is explained how to perform inference and price options in a Bayesian framework. The approach allows one to easily compute risk neutral predictive price densities that take parameter uncertainty into account. In an application to the S&P 500 index, classical and Bayesian inference is performed on the mixture model using the available return data. Comparing the ML estimates and posterior moments, small differences are found. When pricing a rich sample of options on the index, both methods yield similar pricing errors measured in dollar and implied standard deviation losses, and it turns out that the impact of parameter uncertainty is minor. Therefore, when it comes to option pricing where large amounts of data are available, the choice of the inference method is unimportant. The results are robust to different specifications of the variance dynamics but show, however, that there might be scope for using Bayesian methods when considerably less data is available for inference.

Marginal Likelihood Computation for Markov Switching and Change-point GARCH Models

journal paper
Luc Bauwens, Arnaud Dufays, Jeroen V.K. Rombouts
Journal of Econometrics Volume 178, Part 3, 2014, Pages 508–522

GARCH volatility models with fixed parameters are too restrictive for long time series due to breaks in the volatility process. Flexible alternatives are Markov-switching GARCH and change-point GARCH models. They require estimation by MCMC methods due to the path dependence problem. An unsolved issue is the computation of their marginal likelihood, which is essential for determining the number of regimes or change-points. We solve the problem by using particle MCMC, a technique proposed by Andrieu et al. (2010). We examine the performance of this new method on simulated data, and we illustrate its use on several return series.

A Comparison of Forecasting Procedures for Macroeconomic Series: The Contribution of Structural Break Models

journal paper
Bauwens, L., Koop, G., Korobilis, D., Rombouts, J.V.K.
Forthcoming in the Journal of Applied Econometrics

This paper compares the forecasting performance of different models which have been proposed for forecasting in the presence of structural breaks. These models differ in their treatment of the break process, the parameters defining the model which applies in each regime and the out-of-sample probability of a break occurring. In an extensive empirical evaluation involving many important macroeconomic time series, we demonstrate the presence of structural breaks and their importance for forecasting in the vast majority of cases. However, we find no single forecasting model consistently works best in the presence of structural breaks. In many cases, the formal modeling of the break process is important in achieving good forecast performance. However, there are also many cases where simple, rolling OLS forecasts perform well.
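The rolling OLS benchmark mentioned above can be sketched in a few lines of Python (single regressor, hypothetical window length; re-estimating on a short moving window is what lets the forecasts adapt to breaks without modeling the break process):

```python
def ols_fit(x, y):
    """Ordinary least squares fit of y on a constant and one regressor x.
    Returns (intercept, slope)."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    slope = sxy / sxx
    return my - slope * mx, slope

def rolling_ols_forecasts(x, y, window):
    """One-step-ahead forecasts of y[t] from x[t], re-estimating OLS on the
    most recent `window` observations at each step."""
    forecasts = []
    for t in range(window, len(y)):
        a, b = ols_fit(x[t - window:t], y[t - window:t])
        forecasts.append(a + b * x[t])
    return forecasts

# noiseless demo: with a stable linear relation, the rolling forecasts are exact
x = [float(i) for i in range(30)]
y = [1.0 + 2.0 * xi for xi in x]
fcst = rolling_ols_forecasts(x, y, window=10)
```

With a structural break in the slope, the same forecasts degrade for roughly one window length after the break and then recover, which is the trade-off the paper evaluates against formal break models.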

Root-T consistent density estimation in GARCH models

journal paper
Aurore Delaigle, Alexander Meister and Jeroen Rombouts
Forthcoming in the Journal of Econometrics

We consider a new nonparametric estimator of the stationary density of the logarithm of the volatility of the GARCH(1,1) model. This problem is particularly challenging since this density is still unknown, even in cases where the model parameters are given. Although the volatility variables are only observed with multiplicative independent innovation errors with unknown density, we manage to construct a nonparametric procedure which estimates the log volatility density consistently. By carefully exploiting the specific GARCH dependence structure of the data, our iterative procedure even attains the striking parametric root-T convergence rate. As a by-product of our main results, we also derive new smoothness properties of the stationary density. Using numerical simulations, we illustrate the performance of our estimator, and we provide an application to financial data.

Econometrics, Handbook of Computational Statistics

book chapter
Luc Bauwens, Jeroen V. K. Rombouts
Handbook of Computational Statistics pp 1061-1094

For the past decade we have lived in a digitalized world where many actions in human and economic life are monitored. This produces a continuous stream of new, rich and high quality data in the form of panels, repeated cross-sections and long time series. These data resources are available to many researchers at a low cost. This new era is fascinating for econometricians who can address many open economic questions. To do so, new models are developed that call for elaborate estimation techniques. Fast personal computers play an integral part in making it possible to deal with this increased complexity.