2014
Bauwens, Luc; Koop, G.; Korobilis, D.; Rombouts, Jeroen V. K. A Comparison of Forecasting Procedures for Macroeconomic Series: The Contribution of Structural Break Models. Journal Article. In: Journal of Applied Econometrics (forthcoming), 2014. Tags: Forecasting.
This paper compares the forecasting performance of models that have been proposed for forecasting in the presence of structural breaks. These models differ in their treatment of the break process, the parameters of the model that applies in each regime, and the out-of-sample probability of a break occurring. In an extensive empirical evaluation involving many important macroeconomic time series, we demonstrate the presence of structural breaks and their importance for forecasting in the vast majority of cases. However, we find that no single forecasting model consistently works best in the presence of structural breaks. In many cases, formal modeling of the break process is important for achieving good forecast performance; however, there are also many cases where simple rolling OLS forecasts perform well.
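The abstract above notes that simple rolling OLS forecasts often perform well under structural breaks. As an illustration of that baseline only (not of the paper's break models), here is a minimal rolling-window AR(1) forecaster; the function name, window length, and simulated break are illustrative assumptions:

```python
import numpy as np

def rolling_ols_forecast(y, window):
    """One-step-ahead forecasts from an AR(1) fit by OLS on a rolling window.

    A rolling window discards old observations, which is a simple way to
    adapt to structural breaks without modeling the break process."""
    y = np.asarray(y, dtype=float)
    forecasts = []
    for t in range(window, len(y)):
        ys = y[t - window:t]                                  # last `window` obs
        X = np.column_stack([np.ones(window - 1), ys[:-1]])   # constant + lag
        beta, *_ = np.linalg.lstsq(X, ys[1:], rcond=None)     # OLS fit
        forecasts.append(beta[0] + beta[1] * y[t - 1])        # forecast y[t]
    return np.array(forecasts)

# Usage: AR(1) with an intercept break halfway through the sample
rng = np.random.default_rng(0)
y = np.empty(300)
y[0] = 0.0
for t in range(1, 300):
    mu = 0.0 if t < 150 else 2.0          # structural break in the intercept
    y[t] = mu + 0.5 * y[t - 1] + rng.standard_normal()
fc = rolling_ols_forecast(y, window=60)
print(fc.shape)  # (240,)
```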
Rombouts, Jeroen V. K.; Stentoft, Lars; Violante, Francesco. The Value of Multivariate Model Sophistication: An Application to Pricing Dow Jones Industrial Average Options. Journal Article. In: International Journal of Forecasting, vol. 30, no. 1, pp. 78–98, January–March 2014. Tags: Forecasting.
We assess the predictive accuracy of a large number of multivariate volatility models in terms of pricing options on the Dow Jones Industrial Average. We measure the value of model sophistication in terms of dollar losses by considering a set of 444 multivariate models that differ in their specification of the conditional variance, conditional correlation, innovation distribution, and estimation approach. All of the models belong to the dynamic conditional correlation class, which is particularly suitable because it allows consistent estimation of the risk-neutral dynamics with a manageable amount of computational effort for relatively large-scale problems. It turns out that increasing the sophistication of the marginal variance processes (i.e., nonlinearity, asymmetry, and component structure) leads to important gains in pricing accuracy, whereas enriching the model with more complex existing correlation specifications does not improve performance significantly. Estimating the standard dynamic conditional correlation model by composite likelihood, in order to account for potential biases in the parameter estimates, generates only slightly better results. To address the poor performance of the correlation models, we propose a new model that allows for correlation spillovers without requiring too many parameters; it performs about 60% better than the existing correlation models we consider. Relaxing the Gaussian innovation assumption in favor of a Laplace innovation improves the pricing more modestly. In addition to investigating the value of model sophistication in terms of dollar losses directly, we also use the model confidence set approach to statistically infer the set of models that delivers the best pricing performance.
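The entry above builds on the dynamic conditional correlation (DCC) class. As a hedged sketch of the core idea only, here is the scalar DCC correlation recursion of Engle, not any of the paper's 444 pricing specifications; it assumes the returns have already been standardized by their conditional variances, and the parameter values are illustrative:

```python
import numpy as np

def dcc_correlations(z, a, b):
    """Correlation path of the scalar DCC model:
        Q_t = (1 - a - b) * S + a * z_{t-1} z_{t-1}' + b * Q_{t-1},
        R_t = diag(Q_t)^{-1/2} Q_t diag(Q_t)^{-1/2},
    where z_t are standardized returns and S is their sample correlation."""
    z = np.asarray(z, dtype=float)
    S = np.corrcoef(z.T)          # unconditional correlation target
    Q = S.copy()                  # initialize the recursion at S
    R = []
    for t in range(len(z)):
        d = 1.0 / np.sqrt(np.diag(Q))
        R.append(d[:, None] * Q * d[None, :])     # rescale Q to a correlation
        Q = (1 - a - b) * S + a * np.outer(z[t], z[t]) + b * Q
    return np.array(R)

# Usage: three assets, a + b < 1 for stationarity of the recursion
rng = np.random.default_rng(3)
z = rng.standard_normal((500, 3))
R = dcc_correlations(z, a=0.05, b=0.9)
print(R.shape)  # (500, 3, 3)
```

The rescaling step is what guarantees each R_t is a valid correlation matrix with unit diagonal, whatever path Q_t takes.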
2013
Laurent, Sébastien; Rombouts, Jeroen V. K.; Violante, Francesco. On Loss Functions and Ranking Forecasting Performances of Multivariate Volatility Models. Journal Article. In: Journal of Econometrics, vol. 173, no. 1, pp. 1–10, 2013. Tags: Forecasting.
The ranking of multivariate volatility models is inherently problematic: when the unobservable volatility is substituted by a proxy, the ordering implied by a loss function may be biased with respect to the intended one. We point out that the size of the distortion is strictly tied to the level of accuracy of the volatility proxy. We propose a generalized necessary and sufficient functional form for a class of non-metric distance measures of the Bregman type which ensures consistency of the ordering when the target is observed with noise. An application to three foreign exchange rates is provided.
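The robustness property described above can be illustrated numerically in the univariate case: squared-error loss on the variance is of the Bregman type, so ranking two forecasts against a noisy but conditionally unbiased proxy (here, the squared return) agrees with the ranking against the true variance. A minimal sketch with simulated data and two hypothetical forecasts, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 200_000
sigma2 = 0.5 + rng.gamma(2.0, 0.5, T)        # latent "true" variance path
r = rng.standard_normal(T) * np.sqrt(sigma2)  # returns with that variance
proxy = r**2                                  # noisy, conditionally unbiased proxy

h_good = sigma2 * 1.05                        # forecast with a small bias
h_bad = sigma2 * 1.40                         # forecast with a large bias

def mse(h, target):
    """Squared-error loss on the variance: a Bregman-type, robust loss."""
    return np.mean((target - h) ** 2)

# Because E[proxy | sigma2] = sigma2, the proxy-based loss equals the
# true-variance loss plus a forecast-independent term, so the ordering of
# the two forecasts is preserved.
true_rank = mse(h_good, sigma2) < mse(h_bad, sigma2)
proxy_rank = mse(h_good, proxy) < mse(h_bad, proxy)
print(true_rank, proxy_rank)  # True True
```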
2012
Bauwens, Luc; Rombouts, Jeroen V. K. On Marginal Likelihood Computation in Change-Point Models. Journal Article. In: Computational Statistics & Data Analysis, vol. 56, no. 11, pp. 3415–3429, 2012. Tags: Forecasting.
Change-point models are useful for modeling time series subject to structural breaks. For interpretation and forecasting, it is essential to estimate the number of change points correctly in this class of models. In Bayesian inference, the number of change points is typically chosen by the marginal likelihood criterion, computed by Chib's method. This method requires selecting a value in the parameter space at which the computation is performed. Bayesian inference for a change-point dynamic regression model and the computation of its marginal likelihood are explained. Motivated by results from three empirical illustrations, a simulation study shows that Chib's method is robust with respect to the choice of the parameter value used in the computations, among the posterior mean, mode, and quartiles. However, taking into account the precision of the marginal likelihood estimator, the overall recommendation is to use the posterior mode or median. Furthermore, the performance of the Bayesian information criterion, which is based on maximum likelihood estimates, in selecting the correct model is comparable to that of the marginal likelihood.
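Chib's method rests on the basic marginal likelihood identity log m(y) = log f(y | θ*) + log π(θ*) − log π(θ* | y), evaluated at some chosen point θ*. A minimal sketch in a conjugate normal-mean model, where every term is available in closed form, so the identity can be checked against the analytic marginal likelihood; the model and numbers are illustrative, not the paper's change-point regression:

```python
import numpy as np

def norm_logpdf(x, mean, var):
    """Log density of N(mean, var) at x (x may be an array)."""
    return -0.5 * (np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

rng = np.random.default_rng(2)
n, tau2 = 50, 4.0
y = rng.normal(1.0, 1.0, n)      # data: y_i ~ N(theta, 1), prior theta ~ N(0, tau2)

# Conjugate posterior for theta is N(m_n, v_n)
v_n = 1.0 / (n + 1.0 / tau2)
m_n = v_n * y.sum()

# Chib's identity evaluated at theta* = posterior mean:
# log m(y) = log f(y | theta*) + log pi(theta*) - log pi(theta* | y)
theta_star = m_n
chib = (norm_logpdf(y, theta_star, 1.0).sum()     # log likelihood
        + norm_logpdf(theta_star, 0.0, tau2)      # log prior ordinate
        - norm_logpdf(theta_star, m_n, v_n))      # log posterior ordinate

# Analytic log marginal: y ~ N(0, I + tau2 * 11'), via the determinant lemma
s = y.sum()
analytic = (-0.5 * n * np.log(2 * np.pi)
            - 0.5 * np.log(1 + n * tau2)
            - 0.5 * (y @ y - tau2 * s**2 / (1 + n * tau2)))
print(np.isclose(chib, analytic))  # True
```

In this conjugate toy case the identity holds exactly at any θ*. In change-point models the posterior ordinate π(θ* | y) must itself be estimated from MCMC output, which is where the choice of θ* affects precision, the point studied in the paper.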
Laurent, Sébastien; Rombouts, Jeroen V. K.; Violante, Francesco. On the Forecasting Accuracy of Multivariate GARCH Models. Journal Article. In: Journal of Applied Econometrics, vol. 27, no. 6, pp. 934–955, 2012. Tags: Forecasting, GARCH.
This paper addresses the selection of multivariate generalized autoregressive conditional heteroskedastic (GARCH) models in terms of variance matrix forecasting accuracy, with a particular focus on relatively large-scale problems. We consider 10 assets from the New York Stock Exchange and compare 125 models based on 1-, 5- and 20-day-ahead conditional variance forecasts over a period of 10 years using the model confidence set (MCS) and the superior predictive ability (SPA) tests. Model performance is evaluated using four statistical loss functions which account for different types and degrees of asymmetry with respect to over- and under-predictions. When considering the full sample, MCS results are strongly driven by short periods of high market instability during which multivariate GARCH models appear to be inaccurate. Over relatively unstable periods, e.g. the dot-com bubble, the set of superior models is composed of sophisticated specifications such as orthogonal and dynamic conditional correlation (DCC), both with leverage effects in the conditional variances. However, unlike the DCC models, our results show that the orthogonal specifications tend to underestimate the conditional variance. Over calm periods, a simple assumption such as constant conditional correlation and symmetry in the conditional variances cannot be rejected. Finally, during the 2007–2008 financial crisis, accounting for non-stationarity in the conditional variance process generates superior forecasts. The SPA test suggests that, independently of the period, the best models do not provide significantly better forecasts than the DCC model of Engle.
2011
Bouezmarni, Taoufik; Rombouts, Jeroen V. K.; Taamouti, Abderrahim. A Nonparametric Copula Based Test for Conditional Independence with Applications to Granger Causality. Journal Article. In: Journal of Business & Economic Statistics, vol. 30, no. 2, 2011. Tags: Forecasting.
This article proposes a new nonparametric test for conditional independence that can be applied directly to test for Granger causality. Based on the comparison of copula densities, the test is easy to implement because it does not involve a weighting function in the test statistic, and it can be applied in general settings since there is no restriction on the dimension of the time series data. In fact, to apply the test, only a bandwidth is needed for the nonparametric copula. We prove that the test statistic is asymptotically pivotal under the null hypothesis, establish local power properties, and motivate the validity of the bootstrap technique that we use in finite-sample settings. A simulation study illustrates the size and power properties of the test. We illustrate the practical relevance of our test by considering two empirical applications in which we examine Granger noncausality between financial variables. In the first application, and contrary to the general findings in the literature, we provide evidence of two alternative mechanisms of nonlinear interaction between returns and volatilities: nonlinear leverage and volatility feedback effects. This can help to better understand the well-known asymmetric volatility phenomenon. In the second application, we investigate Granger causality between stock index returns and trading volume. We find convincing evidence of linear and nonlinear feedback effects from stock returns to volume, but only weak evidence of a nonlinear feedback effect from volume to stock returns.
2010
Bouezmarni, Taoufik; Rombouts, Jeroen V. K. Nonparametric density estimation for positive time series. Journal Article. In: Computational Statistics & Data Analysis, vol. 54, no. 2, pp. 245–261, 2010. Tags: Forecasting.
2009
Bouezmarni, Taoufik; Rombouts, Jeroen V. K. Semiparametric Multivariate Density Estimation for Positive Data Using Copulas. Journal Article. In: Computational Statistics & Data Analysis, vol. 53, no. 6, pp. 2040–2054, 2009. Tags: Forecasting.
The estimation of density functions for positive multivariate data is discussed. The proposed approach is semiparametric: the estimator combines gamma kernels or local linear kernels, also called boundary kernels, for the estimation of the marginal densities with parametric copulas to model the dependence. This semiparametric approach is robust both to the well-known boundary bias problem and to the curse of dimensionality. Mean integrated squared error properties, including the rate of convergence, uniform strong consistency, and asymptotic normality, are derived. A simulation study investigates the finite-sample performance of the estimator, which performs very well, also for data without boundary bias problems. For bandwidth choice in practice, the univariate least squares cross-validation method for the bandwidth of the marginal density estimators is investigated. Applications in the field of finance are provided.
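The gamma kernels mentioned above replace a symmetric kernel with a gamma density whose shape varies with the evaluation point, so no probability mass is placed below zero. A minimal univariate sketch of a Chen-type gamma kernel density estimator; the bandwidth, data, and function names are illustrative, not from the paper:

```python
import math
import random

def gamma_pdf(t, shape, scale):
    """Density of the Gamma(shape, scale) distribution at t > 0."""
    if t <= 0:
        return 0.0
    return (t ** (shape - 1) * math.exp(-t / scale)
            / (math.gamma(shape) * scale ** shape))

def gamma_kernel_density(x, data, b):
    """Gamma-kernel density estimate at x >= 0 with bandwidth b.

    The kernel evaluated at each data point is a Gamma(x/b + 1, b) density,
    so its support is [0, infinity): this is what avoids the boundary bias
    of symmetric kernels for positive data."""
    shape = x / b + 1.0
    return sum(gamma_pdf(t, shape, b) for t in data) / len(data)

# Usage: estimate an exponential density at its boundary x = 0
random.seed(0)
data = [random.expovariate(1.0) for _ in range(5000)]
est0 = gamma_kernel_density(0.0, data, b=0.1)
print(round(est0, 2))  # the true density at zero is exp(0) = 1; an O(b) smoothing bias remains
```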
Bouezmarni, Taoufik; Rombouts, Jeroen V. K.; Taamouti, Abderrahim. Asymptotic properties of the Bernstein density copula estimator for α-mixing data. Journal Article. In: Computational Statistics & Data Analysis, vol. 53, no. 6, pp. 2040–2054, 2009. Tags: Forecasting.
2008
Bouezmarni, Taoufik; Rombouts, Jeroen V. K. Density and hazard rate estimation for censored and α-mixing data using gamma kernels. Journal Article. In: Journal of Nonparametric Statistics, vol. 20, no. 7, 2008. Tags: Forecasting.
In this paper, we consider nonparametric estimation of the density and hazard rate function for right-censored α-mixing survival time data using kernel smoothing techniques. As survival times are positive, with a potentially high concentration at zero, one has to take into account the bias problems that arise when the functions are estimated in the boundary region. Gamma kernel estimators of the density and the hazard rate function are proposed. The estimators use adaptive weights depending on the point at which we estimate the function, and they are robust to the boundary bias problem. For both estimators, the mean squared error properties, including the rate of convergence, almost sure consistency, and asymptotic normality, are investigated. The results of a simulation study demonstrate the performance of the proposed estimators.
2007
Hafner, Christian M.; Rombouts, Jeroen V. K. Semiparametric Multivariate Volatility Models. Journal Article. In: Econometric Theory, no. 02, pp. 251–280, 2007. Tags: Forecasting.
We consider a model for a multivariate time series in which the conditional covariance matrix is a function of a finite-dimensional parameter and the innovation distribution is nonparametric. The semiparametric lower bound for the estimation of the Euclidean parameter is characterized, and it is shown that adaptive estimation without reparametrization is not possible. Based on a consistent first-stage estimator (such as quasi-maximum likelihood), we propose a semiparametric estimator that estimates the efficient influence function using kernel estimators. We state conditions under which the estimator attains the semiparametric lower bound. For particular models, such as the constant conditional correlation model, adaptive estimation of the dynamic part of the model is shown to be possible. To avoid the curse of dimensionality one can, for example, restrict the multivariate density to the class of spherical distributions, for which we also derive the semiparametric efficiency bound and an estimator that attains this bound. A simulation experiment demonstrates the efficiency gain of the proposed estimator compared with quasi-maximum likelihood estimation.
2005
Mouchart, Michel; Rombouts, Jeroen V. K. Clustered panel data models: an efficient approach for nowcasting from poor data. Journal Article. In: International Journal of Forecasting, vol. 21, no. 3, pp. 577–594, 2005. Tags: Data, Forecasting.
Nowcasting concerns inference on the current realization of random variables using information available until a recent past. This paper proposes a modelling strategy aimed at the best use of data for nowcasting based on panel data with severe deficiencies, namely short time series and many missing data. The basic idea consists of introducing a clustering approach into the usual panel data model specification. A case study in the field of R&D variables illustrates the proposed modelling strategy.
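The clustering idea above can be sketched very roughly: pool a simple regression across all series in a cluster, so that short individual samples still yield usable coefficients. The AR(1) specification, cluster labels, and data below are illustrative assumptions, not the paper's model:

```python
import numpy as np

def clustered_nowcasts(panel, clusters):
    """Nowcast the next observation of every series in a clustered panel.

    panel: dict series_id -> short 1-D sequence of observations
    clusters: dict series_id -> cluster label
    An AR(1) regression is pooled within each cluster, and every series is
    nowcast with its cluster's pooled coefficients -- pooling across similar
    series compensates for the short individual samples."""
    coef = {}
    for g in set(clusters.values()):
        X, Y = [], []
        for sid, y in panel.items():
            if clusters[sid] == g:
                y = np.asarray(y, dtype=float)
                X.extend(y[:-1])          # lagged values, pooled across series
                Y.extend(y[1:])           # matching leads
        Z = np.column_stack([np.ones(len(X)), X])
        coef[g], *_ = np.linalg.lstsq(Z, np.array(Y), rcond=None)
    return {sid: float(coef[clusters[sid]] @ [1.0, float(panel[sid][-1])])
            for sid in panel}

# Usage: six short series in two clusters with different persistence
rng = np.random.default_rng(4)
panel, clusters = {}, {}
for i in range(6):
    g = "A" if i < 3 else "B"
    phi = 0.8 if g == "A" else 0.1
    y = [0.0]
    for _ in range(12):                   # only 12 observations per series
        y.append(phi * y[-1] + rng.standard_normal())
    panel[f"s{i}"], clusters[f"s{i}"] = y, g
nowcasts = clustered_nowcasts(panel, clusters)
print(sorted(nowcasts))  # ['s0', 's1', 's2', 's3', 's4', 's5']
```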