What Truly Works in Time Series Forecasting — The Results from Nixtla’s Mega Study
Time series is a captivating domain where the quest for a crystal ball never ceases.
Uncovering the best forecasting techniques has always been a central pursuit in the field. While many regard knowing which methods work as the Holy Grail, it is equally vital to identify those that fall short: Facebook Prophet is a case in point.
For four decades, the mantra in applied forecasting has been ‘simpler methods prevail,’ influenced largely by the results of the M series forecasting competitions.
These competitions saw minimal machine learning participation, even from their organizers. Even though the top two solutions in the M4 competition (2018) were machine learning based, the organizers remained stubbornly anti-machine learning, suggesting it is 'still up in the air' whether machine learning surpasses traditional techniques such as exponential smoothing in time series forecasting.
A few years after the conclusion of the M4 competition, the organizers moved the subsequent M5 competition to Kaggle. For the first time, this introduced the 'M-competitions' to the machine learning community. The outcome? A resounding upheaval of longstanding academic forecasting beliefs, with undeniable proof: every top solution relied on machine learning, marking it as the future of time-series forecasting.
Roll forward to 2023, when Nixtla published the results of the first mega study, based on a dataset containing 100 billion time series data points.
Nixtla’s study and results are proprietary and cannot be reproduced, so we will take them at face value.
What new insights does the study bring? To find out, we will compare it against the Monash Time Series Forecasting Repository (https://forecastingdata.org/), which has so far been the best publicly available, reproducible benchmark.
Monash Time Series Forecasting Repository Insights: While the repository ranks ETS and TBATS as the top methods for monthly data, Nixtla’s research added a fresh perspective by…
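Since ETS methods anchor the Monash rankings for monthly data, a minimal sketch of additive Holt-Winters exponential smoothing (one member of the ETS family) may help ground what these "simple" baselines actually do. This is a simplified, self-contained illustration with fixed smoothing parameters and a naive initialization, not the repository's actual implementation, which fits parameters by maximum likelihood:

```python
import math


def holt_winters_additive(y, season_length, alpha=0.3, beta=0.1,
                          gamma=0.1, horizon=12):
    """Forecast `horizon` steps ahead with additive Holt-Winters smoothing.

    A simplified sketch: smoothing parameters are fixed rather than fitted,
    and the components are initialized from the first two seasons of data.
    """
    m = season_length
    # Initialize level, trend, and seasonal components from the first two seasons.
    level = sum(y[:m]) / m
    trend = (sum(y[m:2 * m]) - sum(y[:m])) / (m * m)
    seasonal = [y[i] - level for i in range(m)]
    # Recursive smoothing updates over the remaining observations.
    for t in range(m, len(y)):
        s = seasonal[t % m]
        prev_level = level
        level = alpha * (y[t] - s) + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
        seasonal[t % m] = gamma * (y[t] - level) + (1 - gamma) * s
    # h-step-ahead forecasts: extrapolate the trend, reuse the seasonal pattern.
    return [level + (h + 1) * trend + seasonal[(len(y) + h) % m]
            for h in range(horizon)]
```

The appeal of this family of methods is exactly what the M-competition literature emphasized: three update equations, no training infrastructure, and forecasts that respect both trend and seasonal structure out of the box.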