Conformal Prediction forecasting with MAPIE
MAPIE (Model Agnostic Prediction Interval Estimator) is a Scikit-learn-compatible Python library based on Conformal Prediction. With Conformal Prediction, anyone can build probabilistic prediction models whose prediction intervals come with guaranteed coverage, regardless of the data distribution, for any sample size and any underlying forecasting model.
MAPIE recently became the first open-source library to add conformal prediction functionality for time series.
In this article, we will look at how this method, called EnbPI (‘Ensemble Batch Prediction Intervals’), delivers high-quality probabilistic forecasts through the friendly Scikit-learn-compatible interface implemented in MAPIE.
We will use the ‘UCI electricity dataset’ from the article ‘Benchmarking Neural Prophet. Part II — exploring electricity dataset.’ The dataset contains energy consumption by households in Portugal.
We build a machine learning forecasting model using features from the dataset, such as calendar features and lags, with an optimized Random Forest as the point regressor.
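As a sketch of this preparation step (the column name, date range and synthetic series below are illustrative stand-ins, not the actual UCI data):

```python
import numpy as np
import pandas as pd

# Hypothetical daily consumption series standing in for the electricity data.
idx = pd.date_range("2013-01-01", periods=730, freq="D")
rng = np.random.default_rng(42)
df = pd.DataFrame({"consumption": rng.gamma(2.0, 10.0, len(idx))}, index=idx)

# Calendar features derived from the timestamp index.
df["weekday"] = df.index.weekday
df["month"] = df.index.month
df["day_of_year"] = df.index.dayofyear

# Lag features: consumption 1 day, 1 week and 4 weeks earlier.
for lag in (1, 7, 28):
    df[f"lag_{lag}"] = df["consumption"].shift(lag)

# Drop rows whose lags reach before the start of the series.
df = df.dropna()
X, y = df.drop(columns="consumption"), df["consumption"]
```

The resulting `X` (calendar plus lag columns) and `y` (consumption) are what we feed to the Random Forest.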
Whilst one can create highly performant machine learning models for point forecasting, the vast majority of forecasting models (whether machine learning or statistical/econometric) either cannot produce prediction intervals at all or produce intervals whose actual coverage has little to do with the declared confidence level.
The prediction intervals produced by uncalibrated forecasting models are invariably either too wide or, more often, too narrow, leading to incorrect forecasts and decisions, such as setting safety inventories at the wrong level.
The resulting mistakes are usually costly for businesses: in demand planning scenarios they can cost large companies tens of millions of dollars in lost sales or excess/obsolete inventory.
For energy companies, incorrect probabilistic forecasts mean paying significant penalties for under- or over-declaring production volumes to the energy grid, and so on.
Probabilistic forecasts require specific metrics. You can read about this topic in my article ‘How to evaluate Probabilistic Forecasts’.
We first create train and test datasets so that we can make probabilistic predictions six months ahead and evaluate them against actual values.
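A chronological split keeps the evaluation honest; here is a minimal sketch, assuming a daily series and a 180-point (roughly six-month) horizon, with a synthetic stand-in for the prepared features:

```python
import numpy as np
import pandas as pd

# Stand-in feature matrix and target; in the article these come from the
# prepared electricity dataset.
rng = np.random.default_rng(0)
n = 700
X = pd.DataFrame(rng.normal(size=(n, 5)), columns=[f"f{i}" for i in range(5)])
y = pd.Series(rng.normal(size=n))

# Chronological split: the last ~six months (180 daily points) become the
# test set. No shuffling, so the model never trains on future observations.
horizon = 180
X_train, X_test = X.iloc[:-horizon], X.iloc[-horizon:]
y_train, y_test = y.iloc[:-horizon], y.iloc[-horizon:]
```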
The steps for creating probabilistic forecasting with MAPIE are relatively straightforward:
- Load the data and prepare the dataset
- Create a point regressor model. In our case, we use Random Forest, a robust, highly performant model that is relatively easy to build in Scikit-learn. Note that Conformal Prediction can create calibrated prediction intervals for any forecasting model; we use Random Forest here to demonstrate the main principles, but any model from the Scikit-learn toolkit can be used with MAPIE.
- Optimize the base estimator (Random Forest).
- Estimate Prediction Intervals on the test set using MAPIE implementation of EnbPI.
That’s literally it. The only technicality to know is that MAPIE allows not one but two ways to estimate prediction intervals:
- with a regular .fit and .predict process, limiting the construction of prediction intervals to the training-set residuals;
- using .partial_fit in addition to .fit and .predict, allowing MAPIE to use new residuals from the test points as new data becomes available.
The second option is more interesting for high volatility situations or sudden drops/jumps in time series. In our case, we will use the baseline EnbPI approach where uncertainty is estimated using training set residuals.
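To build intuition for this baseline approach, here is a deliberately simplified, scikit-learn-only sketch of the core idea: widen the point forecast by an empirical quantile of training residuals, here taken from Random Forest's out-of-bag predictions. This is not MAPIE's actual EnbPI implementation, which works with a bootstrap ensemble of estimators:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Toy data: a linear signal with noise.
rng = np.random.default_rng(7)
X = rng.normal(size=(500, 3))
y = X.sum(axis=1) + rng.normal(scale=0.3, size=500)
X_train, X_test = X[:400], X[400:]
y_train, y_test = y[:400], y[400:]

model = RandomForestRegressor(n_estimators=100, oob_score=True, random_state=7)
model.fit(X_train, y_train)

# Training residuals from out-of-bag predictions: each training point is
# predicted only by trees that did not see it, mimicking leave-one-out.
residuals = np.abs(y_train - model.oob_prediction_)

# 95% interval: point forecast +/- the 95th percentile of residuals.
q = np.quantile(residuals, 0.95)
y_pred = model.predict(X_test)
lower, upper = y_pred - q, y_pred + q
coverage = np.mean((y_test >= lower) & (y_test <= upper))
```

On well-behaved data this already gives roughly the declared coverage; EnbPI's ensemble construction makes the guarantee rigorous without retraining the model.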
Having built our model, we now look at the predictions, starting with the point predictions. The red line shows actual values; the green line shows point predictions. We see that the relatively straightforward Random Forest model, which took me 5 minutes to build, has captured the underlying dynamics of the energy consumption pretty well. We also see that Random Forest produced smooth forecasts; well-calibrated prediction intervals are therefore even more critical for capturing the volatility of the data and the resulting predictions.
What we are really interested in is how good our probabilistic predictions are. Can they be used to set safety inventory levels reliably? What about setting energy production forecasts for the electricity grid?
TL;DR: it turns out they are rather good probabilistic forecasts! Keep reading to see why and how. We are almost there.
We now look at only the timeline corresponding to the test dataset (the last six months of data) and plot forecasted (red) vs actual energy consumption values (in blue).
We also plot 95% Prediction Intervals (lower limit — green, upper limit — purple).
It turns out that most of the actual points lie well within this 95% Prediction Interval, with only five or so points barely outside.
With 180 test points, 180 × 5% = 9 points outside the interval would still be within the expected tolerance, whilst our model produced only five points outside the prediction intervals. This is pretty good.
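The tolerance arithmetic can be spelled out directly:

```python
# Expected number of misses for a 95% interval over a 180-point test set:
# on average, a fraction alpha of points is allowed to fall outside.
n_points, alpha = 180, 0.05
expected_misses = n_points * alpha  # 9.0
observed_misses = 5  # the number of points outside the intervals in this experiment
assert observed_misses <= expected_misses
```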
In conclusion, Conformal Prediction can produce highly performant probabilistic forecasts, and with EnbPI implementation in MAPIE, this has become a rather straightforward task.
#timeseries #uncertainty #forecasting #demandforecasting #machinelearning #conformalprediction #MAPIE
Additional materials:
1. Awesome Conformal Prediction — the most comprehensive, professionally curated list of Conformal Prediction tutorials, videos, books, papers and open-source libraries in Python and R. https://github.com/valeman/awesome-conformal-prediction
2. Conformal prediction interval for dynamic time-series [1]
3. Model Agnostic Prediction Interval Estimator (MAPIE)
[1] Chen Xu and Yao Xie. “Conformal Prediction Interval for Dynamic Time-Series.” International Conference on Machine Learning (ICML), 2021.