Abstract
In time series analysis, most models are based on the assumption of covariance stationarity. However, many time series in the applied sciences exhibit a time-varying second-order structure: the variance and covariance, or equivalently the spectral structure, are likely to change over time. Examples may be found in a growing number of fields, such as biomedical time series analysis, geophysics, telecommunications, or financial data analysis, to name but a few. In this thesis, we are concerned with the modelling of such nonstationary time series, and with the subsequent questions of how to estimate their second-order structure and how to forecast these processes. We focus on univariate, discrete-time processes with zero mean, arising, for example, when the global trend has been removed from the data.

The first chapter presents a simple model for nonstationarity, where only the variance is time-varying. This model follows the approach of "local stationarity" introduced by [1]. We show that our model satisfactorily explains the nonstationary behaviour of several economic data sets, among which are U.S. stock returns and exchange rates. This chapter is based on [5].

In the second chapter, we study more complex models, where not only the variance is evolutionary. A typical example of these models is given by time-varying ARMA(p,q) processes, which are ARMA(p,q) processes with time-varying coefficients. Our aim is to fit such semiparametric models to nonstationary data. Our data-driven estimator is constructed by minimising a penalised contrast function, where the contrast function is an approximation to the Gaussian likelihood of the model. The theoretical performance of the estimator is analysed via non-asymptotic bounds on the quadratic risk. In our results, we do not assume that the observed data follow the semiparametric structure; that is, our results hold in the misspecified case.
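The time-varying models of the second chapter can be illustrated, in their simplest AR(1) form, by a short simulation. This is a minimal sketch under our own assumptions (the function `simulate_tvar1` and the particular coefficient curves are illustrative choices, not the thesis's estimator): a process whose AR coefficient and innovation standard deviation evolve smoothly in rescaled time t/T.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_tvar1(T, a, sigma):
    """Simulate a time-varying AR(1): X_t = a(t/T) X_{t-1} + sigma(t/T) e_t."""
    x = np.zeros(T)
    eps = rng.standard_normal(T)
    for t in range(1, T):
        u = t / T  # rescaled time in [0, 1]
        x[t] = a(u) * x[t - 1] + sigma(u) * eps[t]
    return x

# Smoothly evolving coefficient and innovation scale (hypothetical curves).
x = simulate_tvar1(
    1024,
    a=lambda u: 0.9 - 0.5 * u,                # AR coefficient drifts from 0.9 to 0.4
    sigma=lambda u: 1.0 + np.sin(np.pi * u),  # innovation scale rises then falls
)
print(x.shape)  # (1024,)
```

Rescaled time t/T is the standard device in the locally stationary framework: the coefficient curves are functions on [0, 1], so the model becomes estimable as T grows.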
The third chapter introduces a fully nonparametric model for local nonstationarity. This is a wavelet-based model of local stationarity which enlarges the class of models defined by Nason et al. [3]. A notion of time-varying "wavelet spectrum" is uniquely defined as a wavelet-type transform of the autocovariance function with respect to so-called "autocorrelation wavelets". This leads to a natural representation of the autocovariance which is localised in scale. One particularly interesting subcase arises when this representation is sparse, meaning that the nonstationary autocovariance may be decomposed in the autocorrelation wavelet basis using few coefficients. In Chapter 4, we present a new test of sparsity for the wavelet spectrum. It is based on a non-asymptotic result on the deviations of a functional of the periodogram. In this chapter, we also present another application of this result: the pointwise adaptive estimation of the wavelet spectrum. Chapters 3 and 4 are based on [6]. Computational aspects of the test of sparsity and of the pointwise adaptive estimator are considered in Chapter 5. We give a description of a full algorithm, together with an application in biostatistics. In this chapter, we also derive a new test of covariance stationarity, applied to another case study in biostatistics. This chapter is based on [7].
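The raw wavelet periodogram underlying estimators in this framework can be sketched in the Haar case. The function name, normalisation, and use of `np.convolve` below are our own illustrative choices, not the thesis's algorithm: squaring non-decimated Haar detail coefficients yields a raw, noisy estimate of the time-varying second-order structure at each scale and time.

```python
import numpy as np

def haar_wavelet_periodogram(x, J):
    """Squared non-decimated Haar detail coefficients at scales j = 1..J.

    Returns an array of shape (J, len(x)): row j-1 holds the raw
    periodogram at scale j for every time point.
    """
    T = len(x)
    I = np.zeros((J, T))
    for j in range(1, J + 1):
        L = 2 ** j
        # Discrete Haar wavelet at scale j: +1 on the first half of its
        # support, -1 on the second half, normalised to unit energy.
        psi = np.concatenate([np.ones(L // 2), -np.ones(L // 2)]) / np.sqrt(L)
        d = np.convolve(x, psi, mode="same")  # non-decimated detail coefficients
        I[j - 1] = d ** 2
    return I

# Raw periodogram of a white-noise stretch at the three finest scales.
I = haar_wavelet_periodogram(np.random.default_rng(2).standard_normal(512), J=3)
```

Being a squared transform, the raw periodogram is nonnegative but not consistent; this is why smoothing and correction steps, such as those studied in Chapters 4 and 5, are needed in practice.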
Finally, Chapter 6 addresses the problem of how to forecast the general nonstationary processes introduced in Chapter 3. We present a new predictor and derive the prediction equations as a generalisation of the Yule-Walker equations. We propose an automatic computational procedure for choosing the parameters of the forecasting algorithm. We then apply the prediction algorithm to a meteorological data set. [...]
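As background for the generalised prediction equations of Chapter 6, the classical, stationary Yule-Walker fit can be written in a few lines. This is only the stationary baseline (the helper `yule_walker_ar` is our own sketch); the thesis's predictor generalises these equations to the nonstationary setting.

```python
import numpy as np

rng = np.random.default_rng(1)

def yule_walker_ar(x, p):
    """Classical Yule-Walker estimate of AR(p) coefficients.

    Solves R phi = gamma, where R is the Toeplitz matrix of sample
    autocovariances gamma(0..p-1) and gamma holds gamma(1..p).
    """
    n = len(x)
    x = x - x.mean()
    gamma = np.array([np.dot(x[: n - k], x[k:]) / n for k in range(p + 1)])
    R = np.array([[gamma[abs(i - j)] for j in range(p)] for i in range(p)])
    return np.linalg.solve(R, gamma[1:])

# Simulate a stationary AR(2) and recover its coefficients.
phi_true = np.array([0.5, -0.3])
x = np.zeros(5000)
e = rng.standard_normal(5000)
for t in range(2, 5000):
    x[t] = phi_true @ x[t - 2:t][::-1] + e[t]
phi_hat = yule_walker_ar(x, 2)
```

In the stationary case the autocovariances do not depend on time, so a single Toeplitz system suffices; under the nonstationary model of Chapter 3 the analogous system must instead be built from the local second-order structure.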