High-dimensional autocovariance matrices and optimal linear prediction
Authors: McMurry, T. L.; Politis, Dimitris Nicolas
Source: Electronic Journal of Statistics
A new methodology for optimal linear prediction of a stationary time series is introduced. Given a sample X1, …, Xn, the optimal linear predictor of Xn+1 is X̂n+1 = Φ1(n)Xn + Φ2(n)Xn−1 + ⋯ + Φn(n)X1. In practice, the coefficient vector Φ(n) = (Φ1(n), Φ2(n), …, Φn(n))′ is routinely truncated to its first p components in order to be consistently estimated. By contrast, we employ a consistent estimator of the n × n autocovariance matrix Γn in order to construct a consistent estimator of the optimal, full-length coefficient vector Φ(n). Asymptotic convergence of the proposed predictor to the oracle is established, and finite-sample simulations are provided to support the applicability of the new method. As a by-product, new insights are gained on the subject of estimating Γn via a positive definite matrix, and four ways to impose positivity are introduced and compared. The closely related problem of spectral density estimation is also addressed. © 2015, Institute of Mathematical Statistics. All rights reserved.
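The idea of solving for a full-length predictor from an estimated autocovariance matrix can be illustrated with a minimal sketch. The example below simulates an AR(1) series (illustrative data only, not from the paper), builds a Toeplitz matrix from sample autocovariances, enforces positive definiteness by clipping eigenvalues from below (one simple positivity correction; the paper introduces and compares four, based on tapered/banded estimators), and solves the Yule-Walker-type system for the predictor coefficients. The predictor length m and the eigenvalue floor are arbitrary choices for this sketch.

```python
import numpy as np
from scipy.linalg import toeplitz, eigh

rng = np.random.default_rng(0)

# Simulate an AR(1) series X_t = 0.6 X_{t-1} + e_t (illustrative data).
n = 400
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.6 * x[t - 1] + rng.standard_normal()

# Sample autocovariances gamma_hat(k) (flat estimates shown here;
# the paper works with tapered estimates of the full matrix Gamma_n).
def acov(x, k):
    xc = x - x.mean()
    return np.dot(xc[: len(x) - k], xc[k:]) / len(x)

m = 50                                   # predictor length (hypothetical choice)
gammas = np.array([acov(x, k) for k in range(m + 1)])

Gamma = toeplitz(gammas[:m])             # m x m autocovariance matrix
gvec = gammas[1 : m + 1]                 # (gamma(1), ..., gamma(m))'

# One simple way to impose positive definiteness: clip eigenvalues
# at a small positive floor relative to gamma_hat(0).
w, V = eigh(Gamma)
w = np.maximum(w, 1e-6 * gammas[0])
Gamma_pd = (V * w) @ V.T

# Solve Gamma_pd * phi = gvec for the predictor coefficients, then
# form the one-step prediction from the m most recent observations.
phi_vec = np.linalg.solve(Gamma_pd, gvec)
x_next = phi_vec @ x[-1 : -m - 1 : -1]   # X_n, X_{n-1}, ..., X_{n-m+1}
```

For an AR(1) process the oracle coefficient vector is effectively one-dimensional, so the fitted phi_vec should concentrate its weight on the first component; longer-memory processes are where the full-length predictor matters.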