7. ENSEMBLING DISCOUNTED VAW EXPERTS WITH THE VAW META-LEARNER


Authors: DMITRY B. ROKHLIN AND GEORGIY A. KARAPETYANTS



How to Cite:




Rokhlin, Dmitry B., and Georgiy A. Karapetyants. "Ensembling Discounted VAW Experts with the VAW Meta-Learner." Global and Stochastic Analysis, vol. 12, no. 5, 2025, pp. 45–58.




Abstract




The Vovk-Azoury-Warmuth (VAW) forecaster is a powerful algorithm for online regression, but its standard form is designed for stationary environments. Recently, Jacobsen and Cutkosky (2024) introduced a discount factor γ into the VAW algorithm (DVAW), enabling it to track changing concepts by down-weighting old data, and they also proposed an ensemble method for learning γ on the fly. In this paper we use a simplified dynamic regret bound and employ the standard VAW forecaster as a meta-learner to dynamically aggregate the predictions of DVAW experts. The main result is a bound on the dynamic regret of the proposed ensemble. Computer experiments on synthetic data show that our ensembling approach significantly outperforms both the standard VAW forecaster and individual DVAW experts in non-stationary settings, while remaining robust and competitive in stationary ones.
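To make the construction described above concrete, the following is a minimal sketch (not the authors' code) of discounted VAW experts aggregated by a plain VAW meta-learner that treats the vector of expert predictions as its feature vector. The class names, the regularization parameter lam, the discount grid, and the exact placement of the discount in the updates are illustrative assumptions rather than details taken from the paper.

# A minimal sketch, assuming a standard incremental form of the (discounted) VAW
# forecaster; parameter names and the discount grid below are illustrative.
import numpy as np

class DVAW:
    """Discounted Vovk-Azoury-Warmuth forecaster; gamma = 1 recovers plain VAW."""
    def __init__(self, dim, gamma=1.0, lam=1.0):
        self.gamma, self.lam = gamma, lam
        self.S = np.zeros((dim, dim))  # discounted sum of x x^T over past rounds
        self.b = np.zeros(dim)         # discounted sum of y x over past rounds

    def predict(self, x):
        # VAW-style prediction: the current feature enters the covariance term,
        # while the label-weighted vector uses only past observations.
        A = self.lam * np.eye(len(x)) + self.gamma * self.S + np.outer(x, x)
        return float(x @ np.linalg.solve(A, self.gamma * self.b))

    def update(self, x, y):
        self.S = self.gamma * self.S + np.outer(x, x)
        self.b = self.gamma * self.b + y * x

class VAWEnsemble:
    """VAW meta-learner aggregating the predictions of DVAW experts."""
    def __init__(self, dim, gammas, lam=1.0):
        self.experts = [DVAW(dim, g, lam) for g in gammas]
        self.meta = DVAW(len(gammas), gamma=1.0, lam=lam)  # plain VAW on expert outputs

    def predict(self, x):
        p = np.array([e.predict(x) for e in self.experts])
        return self.meta.predict(p), p

    def update(self, x, y, p):
        self.meta.update(p, y)
        for e in self.experts:
            e.update(x, y)

# Toy usage on a stream with one abrupt concept change (hypothetical data):
rng = np.random.default_rng(0)
ens = VAWEnsemble(dim=3, gammas=[0.9, 0.99, 0.999, 1.0])
theta = rng.normal(size=3)
for t in range(1000):
    if t == 500:
        theta = rng.normal(size=3)   # concept drift
    x = rng.normal(size=3)
    y_hat, p = ens.predict(x)
    y = x @ theta + 0.1 * rng.normal()
    ens.update(x, y, p)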



Keywords


Vovk-Azoury-Warmuth algorithm; discounting; dynamic regret.