Probabilistic Forecasting with Conformal Prediction and NeuralProphet
In my previous articles, we looked at NeuralProphet and evaluated whether it delivers performance improvements vis-a-vis Facebook Prophet (TL;DR: yes) and whether it generally delivers good point forecasting performance (TL;DR: yes).
NeuralProphet is a much better model for point forecasting because it addresses the principal issue of its predecessor: Facebook Prophet's inability to model local dependencies using autoregressive (AR) terms. NeuralProphet effectively leverages the ability to model local patterns, which is essential for forecasting the near-term future.
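For readers new to the API, enabling autoregression is a one-parameter change. A minimal sketch, assuming a dataframe df with the usual ‘ds’ and ‘y’ columns:

from neuralprophet import NeuralProphet

# Minimal sketch: n_lags switches on AR-Net, so the last 7 observations
# of the target become model inputs
m = NeuralProphet(n_lags=7)
# metrics = m.fit(df, freq="D")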
The NeuralProphet core development team has recently released an exciting package upgrade: NeuralProphet now includes state-of-the-art probabilistic time-series forecasting using Conformal Prediction.
The NeuralProphet core development team has released this statement:
“Our [NeuralProphet core devs] goal is to accelerate the R&D of conformal prediction into the time-series space across any domain using NeuralProphet as the rapid experimentation and prototyping tool. Our current implementation of conformal prediction is only the beginning of our journey into conformal prediction. We invite our friends at the Conformal Prediction community to collaborate and co-develop with us to make the latest conformal prediction techniques more accessible to users in the time-series settings.”
If you are unfamiliar with Conformal Prediction, Awesome Conformal Prediction can help you get started quickly in this very popular and fast-growing field. Conformal Prediction is the best uncertainty quantification framework for the 21st century: it converts any statistical, machine learning or deep learning model into a probabilistic prediction model.
Whilst Conformal Prediction was originally designed expressly for IID data, recent research and innovation have extended it to time series and forecasting. For the first time in the history of time series and forecasting, Conformal Prediction has unlocked robust probabilistic forecasting that can produce Prediction Intervals with the specified coverage at any selected confidence level.
Before Conformal Prediction was extended to time series, most forecasting models could not produce correct Prediction Intervals: they either produce no Prediction Intervals at all, or produce intervals whose actual coverage does not match the specified confidence level.
Research has shown that many forecasting models (statistical, machine learning and deep learning models of every kind) fail to estimate uncertainty properly.
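To make ‘correct coverage’ concrete: a 90% Prediction Interval should contain roughly 90% of unseen actuals. Here is a minimal NumPy sketch (illustrative, not part of NeuralProphet) of how empirical coverage is measured:

import numpy as np

def empirical_coverage(y_true, lower, upper):
    # Fraction of actual values that fall inside [lower, upper]
    y_true, lower, upper = map(np.asarray, (y_true, lower, upper))
    return np.mean((y_true >= lower) & (y_true <= upper))

# A well-calibrated 90% prediction interval should score close to 0.9
# on data the model has never seen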
In “The M4 Forecasting Competition Prediction Intervals”, Yael Grushka-Cockayne, Professor at the Darden School of Business, and her co-authors showed that M4 forecasting competition submissions failed to estimate uncertainty correctly, with Prediction Intervals generally not covering test sets with the specified probability. The lack of correct coverage by forecasting models results in incorrect decision-making, such as under- or overstocking inventory, leading to missed sales, damaged customer loyalty and expensive write-offs of slow-moving and obsolete inventory.
Conformal Prediction solves this problem, and in one of my previous articles, “Conformal Prediction forecasting with MAPIE”, I showcased how Conformal Prediction can produce reliable Prediction Intervals using a model called EnbPI (‘Ensemble batch prediction intervals’) that delivers high-quality probabilistic forecasts via the friendly scikit-learn-compatible functionality implemented in MAPIE.
The NeuralProphet core development team has also been working on Conformal Prediction, which has recently been integrated into NeuralProphet. The latest release includes several options for probabilistic forecasting using Conformal Prediction, including an implementation of one of the most popular methods, Conformalized Quantile Regression (CQR). CQR adjusts prediction intervals dynamically to account for local uncertainty in predictions.
CQR works each time and every time, and it does so by default thanks to the in-built mathematical validity guarantees that Conformal Prediction confers on any underlying model. When CQR (or any other Conformal Prediction method) produces a 95% prediction interval, one can be certain it covers 95% by default (no ifs and no buts, unlike other methods), regardless of the data distribution, the sample size, and whether the underlying regressor is a statistical, machine learning or deep learning model.
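For intuition, the CQR correction can be sketched in a few lines of NumPy (an illustrative sketch of the method, not NeuralProphet's actual implementation): each calibration point is scored by how far it falls outside the quantile band, and the band is then widened or shrunk by a finite-sample quantile of those scores.

import numpy as np

def cqr_correction(y_cal, q_lo_cal, q_hi_cal, alpha=0.1):
    y_cal = np.asarray(y_cal)
    # Conformity score: positive when y falls outside [q_lo, q_hi]
    scores = np.maximum(np.asarray(q_lo_cal) - y_cal,
                        y_cal - np.asarray(q_hi_cal))
    n = len(scores)
    # Finite-sample-adjusted (1 - alpha) empirical quantile of the scores
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return np.quantile(scores, level)

The corrected interval for a test point x is [q_lo(x) - q_hat, q_hi(x) + q_hat]: because q_hat is estimated on held-out calibration data, the coverage guarantee holds regardless of the underlying regressor.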
In this article, we follow on from ‘Benchmarking NeuralProphet. Part II — Exploring the electricity dataset’, where we produced point forecasts for an electricity time series using both the linear and non-linear (AR-Net) functionality of NeuralProphet.
Instead of training NeuralProphet to produce point forecasts, we now train it to predict quantiles.
# specify quantiles
import pandas as pd
from neuralprophet import NeuralProphet, set_random_seed

quantile_lo, quantile_hi = 0.05, 0.95
quantiles = [quantile_lo, quantile_hi]
n_lags = 7

m_quantile_regression = NeuralProphet(
    growth='off',
    yearly_seasonality=False,
    weekly_seasonality=False,
    daily_seasonality=False,
    n_lags=n_lags,          # autoregression over the last 7 days
    learning_rate=0.01,
    quantiles=quantiles,    # fit the 5% and 95% quantiles via pinball loss
)
%%time
random_seed = 0
set_random_seed(random_seed)

# Concatenate train_df and val_df as the full training set
metrics_quantile = m_quantile_regression.fit(
    pd.concat([train_df, val_df]), freq="D"
)
Forecasting on the test set, we now obtain a probabilistic forecast using classical quantile regression.
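In code, this is the standard predict call; with quantiles configured, the returned dataframe carries an extra column per quantile (the exact labels, e.g. ‘yhat1 5.0%’, depend on the NeuralProphet version):

# Quantile regression forecast on the held-out test set
forecast_qr_test_df = m_quantile_regression.predict(test_df)
print(forecast_qr_test_df.columns)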
Are we done, then? Not quite: remember, classical quantile regression does not produce calibrated Prediction Intervals. Just because we specified the 5% and 95% quantiles as a wish list does not mean that 90% of test points will fall within this Prediction Interval.
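We can check this directly with the empirical_coverage helper sketched earlier (the quantile column names below are illustrative):

# Coverage of the raw quantile regression interval on the test set
cov = empirical_coverage(forecast_qr_test_df["y"],
                         forecast_qr_test_df["yhat1 5.0%"],
                         forecast_qr_test_df["yhat1 95.0%"])
print(f"Raw quantile regression coverage: {cov:.3f} (nominal: 0.90)")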
This is where the magic of Conformal Prediction comes in: we can correct the Prediction Intervals produced by any model, including the classical quantile regression we have used so far to train NeuralProphet.
We now complete the last step: ‘conformalizing’ our probabilistic forecasting model.
All we need to do is call the .conformal_predict() method on our quantile regression model, passing the calibration set that was set aside earlier. The objective of this step is to correct the Prediction Intervals created by quantile regression so that they deliver the declared coverage in line with the selected 90% confidence level.
method = "naive"
alpha = 0.1
plotting_backend = "matplotlib" # "plotly", None
naive_forecast_conformal_test_df = m_quantile_regression.conformal_predict(
    test_df,
    calibration_df=cal_df,
    alpha=alpha,
    method=method,
    plotting_backend=plotting_backend,
)
As part of the results, NeuralProphet helpfully produces a plot showing how the width of the selected interval (y_hat-q1, y_hat+q1) varies with the selected confidence level. The higher the confidence we want, the wider the Prediction Interval needs to be to ensure correct coverage: 90% confidence means we want 90% of the points in the test dataset, unseen by NeuralProphet, to fall within the Prediction Interval. We see that for the selected 90% level, the width of the PI is just over ~2,300, which is what we saw on the forecast plot above.
We can now plot predictions created using the ‘naive’ method (the naive method computes absolute residuals on the calibration set and then uses the confidence level to select the appropriate quantile of those residuals to correct the prediction intervals produced by quantile regression).
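In other words, the naive method boils down to a constant-width padding. A rough sketch of that logic (illustrative only, not the library internals):

import numpy as np

def naive_conformal_padding(y_cal, yhat_cal, alpha=0.1):
    # Quantile of absolute calibration residuals used to pad the interval
    residuals = np.abs(np.asarray(y_cal) - np.asarray(yhat_cal))
    n = len(residuals)
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return np.quantile(residuals, level)

# The same padding is applied at every test point, so the corrected interval
# has constant width, in contrast to CQR's locally adaptive bands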
As we can see, the original Prediction Intervals produced by quantile regression were overly optimistic, and the naive Conformal Prediction method has corrected them.
We then use the more powerful CQR method.
method = "cqr"alpha = 0.1plotting_backend = "matplotlib" # "plotly", None
cqr_forecast_conformal_test_df = m_quantile_regression.conformal_predict(
test_df,
calibration_df=cal_df,
alpha=alpha,
method=method,
plotting_backend=plotting_backend,
)
CQR has used the data more efficiently and formed better corrections of the quantile regression forecasts, delivering both correct coverage and more efficient (narrower) prediction intervals.
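A quick way to verify this on your own run is to compare empirical coverage and average interval width of the two corrected forecasts (the quantile column labels below are assumptions; inspect the returned dataframes for the exact names in your NeuralProphet version):

for name, df in [("naive", naive_forecast_conformal_test_df),
                 ("cqr", cqr_forecast_conformal_test_df)]:
    lower, upper = df["yhat1 5.0%"], df["yhat1 95.0%"]  # assumed labels
    coverage = ((df["y"] >= lower) & (df["y"] <= upper)).mean()
    width = (upper - lower).mean()
    print(f"{name}: coverage={coverage:.3f}, average width={width:,.0f}")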
References:
- “Benchmarking Neural Prophet. Part I — Neural Prophet vs Facebook Prophet”
- “Benchmarking Neural Prophet. Part II — Exploring the electricity dataset”
- “The M4 Forecasting Competition Prediction Intervals”
- “How to predict quantiles in a more intelligent way (or ‘Bye-bye quantile regression, hello Conformal Quantile Regression’)”
- https://pypi.org/project/neuralprophet/
- Conformal Prediction using Energy Hospital Load tutorial