Step-by-step guide on using the AutoARIMA Model with Statsforecast.
An autoARIMA is a time series model that uses an automatic process to select the optimal ARIMA (Autoregressive Integrated Moving Average) model parameters for a given time series. ARIMA is a widely used statistical model for modeling and predicting time series.
The process of automatic parameter selection in an autoARIMA model is performed using statistical and optimization techniques, such as the Akaike Information Criterion (AIC) and cross-validation, to identify optimal values for the autoregressive, integration, and moving average parameters of the ARIMA model.
Automatic parameter selection is useful because it can be difficult to determine the optimal parameters of an ARIMA model for a given time series without a thorough understanding of the underlying stochastic process that generates the time series. The autoARIMA model automates the parameter selection process and can provide a fast and effective solution for time series modeling and forecasting.
The statsforecast.models library provides the AutoARIMA function, a Python implementation of autoARIMA that automatically selects the optimal parameters for an ARIMA model given a time series.
An ARIMA (autoregressive integrated moving average) process is the combination of an autoregressive process AR(p), integration I(d), and a moving average process MA(q).
Just like the ARMA process, the ARIMA process states that the present value is dependent on past values, coming from the AR(p) portion, and past errors, coming from the MA(q) portion. However, instead of using the original series, denoted as $y_t$, the ARIMA process uses the differenced series, denoted as $y'_t$. Note that $y'_t$ can represent a series that has been differenced more than once.

Therefore, the mathematical expression of the ARIMA(p,d,q) process states that the present value of the differenced series $y'_t$ is equal to the sum of a constant $C$, past values of the differenced series $\phi_p y'_{t-p}$, past error terms $\theta_q \varepsilon_{t-q}$, and a current error term $\varepsilon_t$, as shown in equation (1):

$$y'_t = C + \phi_1 y'_{t-1} + \cdots + \phi_p y'_{t-p} + \theta_1 \varepsilon_{t-1} + \cdots + \theta_q \varepsilon_{t-q} + \varepsilon_t \qquad (1)$$

where $y'_t$ is the differenced series (it may have been differenced more than once). The “predictors” on the right hand side include both lagged values of $y'_t$ and lagged errors. We call this an ARIMA(p,d,q) model, where
p | order of the autoregressive part |
d | degree of first differencing involved |
q | order of the moving average part |
The same stationarity and invertibility conditions that are used for autoregressive and moving average models also apply to an ARIMA model.
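For example, in the first-order cases these conditions reduce to simple constraints on the coefficients:

$$\text{AR(1) stationarity: } |\phi_1| < 1, \qquad \text{MA(1) invertibility: } |\theta_1| < 1.$$

More generally, the roots of the AR polynomial and of the MA polynomial must lie outside the unit circle.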
Many of the models we have already discussed are special cases of the ARIMA model, as shown in the following table.
Model | p d q | Equation | Method
---|---|---|---
ARIMA(0,0,0) | 0 0 0 | $y_t = Y_t$ | White noise
ARIMA(0,1,0) | 0 1 0 | $y_t = Y_t - Y_{t-1}$ | Random walk
ARIMA(0,2,0) | 0 2 0 | $y_t = Y_t - 2Y_{t-1} + Y_{t-2}$ | Constant
ARIMA(1,0,0) | 1 0 0 | $\hat{Y}_t = \mu + \Phi_1 Y_{t-1} + \epsilon$ | AR(1): first-order autoregressive model
ARIMA(2,0,0) | 2 0 0 | $\hat{Y}_t = \Phi_0 + \Phi_1 Y_{t-1} + \Phi_2 Y_{t-2} + \epsilon$ | AR(2): second-order autoregressive model
ARIMA(1,1,0) | 1 1 0 | $\hat{Y}_t = \mu + Y_{t-1} + \Phi_1 (Y_{t-1} - Y_{t-2})$ | Differenced first-order autoregressive model
ARIMA(0,1,1) | 0 1 1 | $\hat{Y}_t = Y_{t-1} - \Theta_1 e_{t-1}$ | Simple exponential smoothing
ARIMA(0,0,1) | 0 0 1 | $\hat{Y}_t = \mu_0 + \epsilon_t - \omega_1 \epsilon_{t-1}$ | MA(1): first-order moving average model
ARIMA(0,0,2) | 0 0 2 | $\hat{Y}_t = \mu_0 + \epsilon_t - \omega_1 \epsilon_{t-1} - \omega_2 \epsilon_{t-2}$ | MA(2): second-order moving average model
ARIMA(1,0,1) | 1 0 1 | $\hat{Y}_t = \Phi_0 + \Phi_1 Y_{t-1} + \epsilon_t - \omega_1 \epsilon_{t-1}$ | ARMA model
ARIMA(1,1,1) | 1 1 1 | $\Delta Y_t = \Phi_1 \Delta Y_{t-1} + \epsilon_t - \omega_1 \epsilon_{t-1}$ | ARIMA model
ARIMA(1,1,2) | 1 1 2 | $\hat{Y}_t = Y_{t-1} + \Phi_1 (Y_{t-1} - Y_{t-2}) - \Theta_1 e_{t-1} - \Theta_2 e_{t-2}$ | Damped-trend linear exponential smoothing
ARIMA(0,2,1) or (0,2,2) | 0 2 1 | $\hat{Y}_t = 2Y_{t-1} - Y_{t-2} - \Theta_1 e_{t-1} - \Theta_2 e_{t-2}$ | Linear exponential smoothing
Once we start combining components in this way to form more complicated models, it is much easier to work with the backshift notation. For example, equation (1) can be written in backshift notation as:

$$(1 - \phi_1 B - \cdots - \phi_p B^p)(1 - B)^d y_t = C + (1 + \theta_1 B + \cdots + \theta_q B^q)\varepsilon_t$$
Selecting appropriate values for p, d and q can be difficult. However, the AutoARIMA() function from statsforecast will do it for you automatically.
For more information, see the documentation.
Using an AutoARIMA() model to model and predict time series has several advantages, including:
* Automation of the parameter selection process: The AutoARIMA() function automates the ARIMA model parameter selection process, which can save the user time and effort by eliminating the need to manually try different combinations of parameters.
* Reduction of prediction error: By automatically selecting optimal parameters, the ARIMA model can improve the accuracy of predictions compared to manually selected ARIMA models.
* Identification of complex patterns: The AutoARIMA() function can identify complex patterns in the data that may be difficult to detect visually or with other time series modeling techniques.
* Flexibility in the choice of the parameter selection methodology: The AutoARIMA() function can use different methodologies to select the optimal parameters, such as the Akaike Information Criterion (AIC), cross-validation and others, which allows the user to choose the methodology that best suits their needs, as sketched below.
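For instance, a minimal sketch of switching the information criterion (the parameter names come from the AutoARIMA parameter list shown later in this guide; the values are purely illustrative):

from statsforecast.models import AutoARIMA

# Rank candidate models by BIC instead of the default information criterion
model_bic = AutoARIMA(season_length=12, ic="bic")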
In general, using the AutoARIMA() function can help improve the efficiency and accuracy of time series modeling and forecasting, especially for users who are inexperienced with manual parameter selection for ARIMA models.
We compared accuracy and speed against pmdarima, Rob Hyndman’s forecast package and Facebook’s Prophet. We used the Daily, Hourly and Weekly data from the M4 competition.
The following table summarizes the results. As can be seen, our auto_arima is the best model in accuracy (measured by the MASE loss) and time, even compared with the original implementation in R.
dataset | metric | auto_arima_nixtla | auto_arima_pmdarima [1] | auto_arima_r | prophet |
---|---|---|---|---|---|
Daily | MASE | 3.26 | 3.35 | 4.46 | 14.26 |
Daily | time | 1.41 | 27.61 | 1.81 | 514.33 |
Hourly | MASE | 0.92 | — | 1.02 | 1.78 |
Hourly | time | 12.92 | — | 23.95 | 17.27 |
Weekly | MASE | 2.34 | 2.47 | 2.58 | 7.29 |
Weekly | time | 0.42 | 2.92 | 0.22 | 19.82 |
[1] The model auto_arima from pmdarima had a problem with Hourly data. An issue was opened.
The following table summarizes the data details.
group | n_series | mean_length | std_length | min_length | max_length |
---|---|---|---|---|---|
Daily | 4,227 | 2,371 | 1,756 | 107 | 9,933 |
Hourly | 414 | 901 | 127 | 748 | 1,008 |
Weekly | 359 | 1,035 | 707 | 93 | 2,610 |
Statsforecast will be needed. To install it, see these instructions.
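If statsforecast is not installed yet, a typical notebook install looks like the following (assuming a pip-based environment; adjust for conda if needed):

!pip install statsforecast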
Next, we import plotting libraries and configure the plotting style.
import numpy as np
import pandas as pd
import scipy.stats as stats
import matplotlib.pyplot as plt
import seaborn as sns
from statsmodels.graphics.tsaplots import plot_acf
from statsmodels.graphics.tsaplots import plot_pacf
plt.style.use('fivethirtyeight')
plt.rcParams['lines.linewidth'] = 1.5
dark_style = {
'figure.facecolor': '#212946',
'axes.facecolor': '#212946',
'savefig.facecolor':'#212946',
'axes.grid': True,
'axes.grid.which': 'both',
'axes.spines.left': False,
'axes.spines.right': False,
'axes.spines.top': False,
'axes.spines.bottom': False,
'grid.color': '#2A3459',
'grid.linewidth': '1',
'text.color': '0.9',
'axes.labelcolor': '0.9',
'xtick.color': '0.9',
'ytick.color': '0.9',
'font.size': 12 }
plt.rcParams.update(dark_style)
from pylab import rcParams
rcParams['figure.figsize'] = (18,7)
Loading Data
df = pd.read_csv("https://raw.githubusercontent.com/Naren8520/Serie-de-tiempo-con-Machine-Learning/main/Data/candy_production.csv")
df.head()
   observation_date  IPG3113N
0        1972-01-01   85.6945
1        1972-02-01   71.8200
2        1972-03-01   66.0229
3        1972-04-01   64.5645
4        1972-05-01   65.0100
The input to StatsForecast is always a data frame in long format with three columns: unique_id, ds and y:

* The unique_id (string, int or category) represents an identifier for the series.
* The ds (datestamp) column should be of a format expected by Pandas, ideally YYYY-MM-DD for a date or YYYY-MM-DD HH:MM:SS for a timestamp.
* The y (numeric) represents the measurement we wish to forecast.
df["unique_id"]="1"
df.columns=["ds", "y", "unique_id"]
df.head()
           ds        y  unique_id
0  1972-01-01  85.6945          1
1  1972-02-01  71.8200          1
2  1972-03-01  66.0229          1
3  1972-04-01  64.5645          1
4  1972-05-01  65.0100          1
print(df.dtypes)
ds object
y float64
unique_id object
dtype: object
We need to convert ds from the object type to datetime.
df["ds"] = pd.to_datetime(df["ds"])
Explore data with the plot method
Plot a series using the plot method from the StatsForecast class. This method prints a random series from the dataset and is useful for basic exploratory data analysis.
from statsforecast import StatsForecast
StatsForecast.plot(df, engine="matplotlib")
Autocorrelation plots
fig, axs = plt.subplots(nrows=1, ncols=2)
plot_acf(df["y"], lags=60, ax=axs[0],color="fuchsia")
axs[0].set_title("Autocorrelation");
plot_pacf(df["y"], lags=60, ax=axs[1],color="lime")
axs[1].set_title('Partial Autocorrelation')
plt.show();
Decomposition of the time series
How to decompose a time series and why?
In time series analysis, to forecast new values it is very important to know the past data. More formally, it is very important to know the patterns that values follow over time. There can be many reasons that cause our forecast values to go in the wrong direction. Basically, a time series consists of four components, and variation in these components causes changes in the pattern of the time series. These components are:
Level: This is the average value around which the series varies over time.
Trend: The trend is the value that causes increasing or
decreasing patterns in a time series.
Seasonality: This is a cyclical event that occurs in a time
series for a short time and causes short-term increasing or
decreasing patterns in a time series.
Residual/Noise: These are the random variations in the time
series.
Combining these components over time leads to the formation of a time series. Most time series consist of level and noise/residual, while trend and seasonality are optional components. If seasonality and trend are part of the time series, they will affect the forecast values, since the pattern of the forecasted time series may differ from that of the previous time series.
The combination of the components in a time series can be of two types:

* Additive
* Multiplicative
Additive time series
If the components of the time series are added together to make the time series, then it is called an additive time series. By visualization, we can say that the time series is additive if its increasing or decreasing pattern is similar throughout the series. Any additive time series can be represented mathematically as:
y(t) = Level + Trend + Seasonality + Noise
Multiplicative time series
If the components of the time series are multiplied together, then it is called a multiplicative time series. By visualization, if the time series exhibits exponential growth or decline over time, it can be considered a multiplicative time series. A multiplicative time series can be represented mathematically as:
y(t) = Level * Trend * Seasonality * Noise
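For illustration, a minimal sketch of a multiplicative decomposition using statsmodels (this assumes the series is strictly positive, which multiplicative decomposition requires):

from statsmodels.tsa.seasonal import seasonal_decompose

# Multiplicative decomposition: y(t) = Level * Trend * Seasonality * Noise
decomposition = seasonal_decompose(df["y"], model="multiplicative", period=12)
decomposition.plot();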
from statsmodels.tsa.seasonal import seasonal_decompose
a = seasonal_decompose(df["y"], model = "add", period=12)
a.plot();
Split the data into training and testing

Let’s divide our data into two sets:

1. Data to train our AutoARIMA model
2. Data to test our model

For the test data we will use the last 12 months to test and evaluate the performance of our model.
Y_train_df = df[df.ds<='2016-08-01']
Y_test_df = df[df.ds>'2016-08-01']
Y_train_df.shape, Y_test_df.shape
((536, 3), (12, 3))
Now let’s plot the training data and the test data.
sns.lineplot(Y_train_df,x="ds", y="y", label="Train")
sns.lineplot(Y_test_df, x="ds", y="y", label="Test")
plt.show()
Implementation of AutoARIMA with StatsForecast
The parameters of the AutoARIMA Model are listed below. For more information, visit the documentation.
d : Optional[int]
Order of first-differencing.
D : Optional[int]
Order of seasonal-differencing.
max_p : int
Max autoregressive order p.
max_q : int
Max moving average order q.
max_P : int
Max seasonal autoregressive order P.
max_Q : int
Max seasonal moving average order Q.
max_order : int
Max p+q+P+Q value if not stepwise selection.
max_d : int
Max non-seasonal differences.
max_D : int
Max seasonal differences.
start_p : int
Starting value of p in stepwise procedure.
start_q : int
Starting value of q in stepwise procedure.
start_P : int
Starting value of P in stepwise procedure.
start_Q : int
Starting value of Q in stepwise procedure.
stationary : bool
If True, restricts search to stationary models.
seasonal : bool
If False, restricts search to non-seasonal models.
ic : str
Information criterion to be used in model selection.
stepwise : bool
If True, will do stepwise selection (faster).
nmodels : int
Number of models considered in stepwise search.
trace : bool
If True, the searched ARIMA models are reported.
approximation : Optional[bool]
If True, use conditional sums-of-squares estimation, with final MLE.
method : Optional[str]
Fitting method: maximum likelihood or sums-of-squares.
truncate : Optional[int]
Number of observations the series is truncated to in model selection.
test : str
Unit root test to use. See `ndiffs` for details.
test_kwargs : Optional[str]
Unit root test additional arguments.
seasonal_test : str
Selection method for seasonal differences.
seasonal_test_kwargs : Optional[dict]
Seasonal unit root test arguments.
allowdrift : bool (default True)
If True, models with drift terms are considered.
allowmean : bool (default True)
If True, models with a non-zero mean are considered.
blambda : Optional[float]
Box-Cox transformation parameter.
biasadj : bool
Use bias-adjusted back-transformed mean for Box-Cox transformations.
season_length : int
Number of observations per unit of time. For example, 24 for hourly data.
alias : str
Custom name of the model.
prediction_intervals : Optional[ConformalIntervals]
Information to compute conformal prediction intervals. By default, the model will compute the native prediction intervals.
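As a sketch of how a few of these parameters can be combined (the values below are illustrative choices, not recommendations):

from statsforecast.models import AutoARIMA

model = AutoARIMA(
    season_length=12,  # monthly data
    ic="aicc",         # information criterion used to rank candidate models
    stepwise=True,     # stepwise search instead of an exhaustive one (faster)
    max_p=3,           # bound on the non-seasonal AR order
    max_q=3,           # bound on the non-seasonal MA order
)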
Load libraries
from statsforecast import StatsForecast
from statsforecast.models import AutoARIMA
from statsforecast.arima import arima_string
Instantiating Model

Import and instantiate the models. Setting the season_length argument is sometimes tricky. This article on seasonal periods by the master, Rob Hyndman, can be useful.
season_length = 12 # Monthly data
horizon = len(Y_test_df) # number of predictions
models = [AutoARIMA(season_length=season_length)]
We fit the models by instantiating a new StatsForecast object with the following parameters:

* models: a list of models. Select the models you want from models and import them.
* freq: a string indicating the frequency of the data. (See pandas’ available frequencies.)
* n_jobs: int, number of jobs used in the parallel processing, use -1 for all cores.
* fallback_model: a model to be used if a model fails.

Any settings are passed into the constructor. Then you call its fit method and pass in the historical data frame.
sf = StatsForecast(df=Y_train_df,
models=models,
freq='MS',
n_jobs=-1)
Fit the Model
sf.fit()
StatsForecast(models=[AutoARIMA])
Once we have fitted our model, we can use the arima_string function to see the parameters that the model has found.
arima_string(sf.fitted_[0,0].model_)
'ARIMA(1,0,0)(0,1,2)[12] '
The automation process found that the best model has the form ARIMA(1,0,0)(0,1,2)[12]. This means that our model has p=1, that is, one non-seasonal autoregressive term. It also has a seasonal part with D=1, that is, one seasonal difference, and Q=2, that is, two seasonal moving average terms.
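Written out in backshift notation (following the conventions used earlier in this guide), the fitted ARIMA(1,0,0)(0,1,2)[12] model reads:

$$(1 - \phi_1 B)(1 - B^{12})\, y_t = (1 + \Theta_1 B^{12} + \Theta_2 B^{24})\, \varepsilon_t$$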
To see the values of the terms of our model, we can use the following statement to inspect the full result of the fitted model.
result=sf.fitted_[0,0].model_
print(result.keys())
print(result['arma'])
dict_keys(['coef', 'sigma2', 'var_coef', 'mask', 'loglik', 'aic', 'arma', 'residuals', 'code', 'n_cond', 'nobs', 'model', 'bic', 'aicc', 'ic', 'xreg', 'x', 'lambda'])
(1, 0, 0, 2, 12, 0, 1)
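The arma tuple appears to follow the convention of R’s arima object, namely (p, q, P, Q, season_length, d, D); here is a small sketch of our own to unpack it:

# Unpack the arma tuple, assuming the (p, q, P, Q, m, d, D) ordering
p, q, P, Q, m, d, D = result['arma']
print(f"ARIMA({p},{d},{q})({P},{D},{Q})[{m}]")  # prints ARIMA(1,0,0)(0,1,2)[12]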
Let us now visualize the residuals of our model. As we can see, the result obtained above is a dictionary. To extract each element from it we use the .get() method and then save the result in a pd.DataFrame().
residual=pd.DataFrame(result.get("residuals"), columns=["residual Model"])
residual
     residual Model
0          0.085694
1          0.071820
2          0.066023
…                 …
533        1.258873
534        1.585062
535       -6.199166
fig, axs = plt.subplots(nrows=2, ncols=2)

# residuals over time
residual.plot(ax=axs[0,0])
axs[0,0].set_title("Residuals")

# density plot (histplot replaces the deprecated sns.distplot)
sns.histplot(residual, kde=True, ax=axs[0,1])
axs[0,1].set_title("Density plot - Residual")

# Q-Q plot against the normal distribution
stats.probplot(residual["residual Model"], dist="norm", plot=axs[1,0])
axs[1,0].set_title('Plot Q-Q')

# autocorrelation of the residuals
plot_acf(residual, lags=35, ax=axs[1,1], color="fuchsia")
axs[1,1].set_title("Autocorrelation")

plt.show()
To generate forecasts we only have to use the predict method specifying the forecast horizon (h). In addition, to calculate prediction intervals associated with the forecasts, we can include the parameter level, which receives a list of levels for the prediction intervals we want to build. In this case we will calculate the 95% forecast interval (level=[95]).
Forecast Method

If you want to gain speed in productive settings where you have multiple series or models we recommend using the StatsForecast.forecast method instead of .fit and .predict.

The main difference is that .forecast does not store the fitted values and is highly scalable in distributed environments.

The forecast method takes two arguments: the forecast horizon h and level.
* h (int): represents the forecast h steps into the future. In this case, 12 months ahead.
* level (list of floats): this optional parameter is used for probabilistic forecasting. Set the level (or confidence percentile) of your prediction interval. For example, level=[90] means that the model expects the real value to be inside that interval 90% of the times.
The forecast object here is a new data frame that includes a column with the name of the model and the y hat values, as well as columns for the uncertainty intervals. Depending on your computer, this step should take around 1 minute. (If you want to speed things up to a couple of seconds, remove the AutoModels like ARIMA and Theta.)
Y_hat_df = sf.forecast(horizon, fitted=True)
Y_hat_df.head()
                   ds   AutoARIMA
unique_id
1          2016-09-01  109.955437
1          2016-10-01  121.920509
1          2016-11-01  122.458389
1          2016-12-01  120.562027
1          2017-01-01  106.864670
values=sf.forecast_fitted_values()
values
                   ds           y   AutoARIMA
unique_id
1          1972-01-01   85.694504   85.608803
1          1972-02-01   71.820000   71.748177
1          1972-03-01   66.022903   65.956879
…                   …           …           …
1          2016-06-01  102.404404  101.145523
1          2016-07-01  102.951202  101.366135
1          2016-08-01  104.697701  110.896866
Adding 95% confidence interval with the forecast method
sf.forecast(h=12, level=[95])
                   ds   AutoARIMA  AutoARIMA-lo-95  AutoARIMA-hi-95
unique_id
1          2016-09-01  109.955437       102.116188       117.794685
1          2016-10-01  121.920509       112.380608       131.460403
1          2016-11-01  122.458389       112.200500       132.716278
…                   …           …                …                …
1          2017-06-01   96.751160        85.873802       107.628525
1          2017-07-01   97.451607        86.572372       108.330833
1          2017-08-01  103.420616        92.540489       114.300743
Y_hat_df=Y_hat_df.reset_index()
Y_hat_df
   unique_id          ds   AutoARIMA
0          1  2016-09-01  109.955437
1          1  2016-10-01  121.920509
2          1  2016-11-01  122.458389
…          …           …           …
9          1  2017-06-01   96.751160
10         1  2017-07-01   97.451607
11         1  2017-08-01  103.420616
Y_test_df['unique_id'] = Y_test_df['unique_id'].astype(int)
Y_hat_df = Y_test_df.merge(Y_hat_df, how='left', on=['unique_id', 'ds'])
fig, ax = plt.subplots(1, 1, figsize = (18, 7))
plot_df = pd.concat([Y_train_df, Y_hat_df]).set_index('ds')
plot_df[['y', 'AutoARIMA']].plot(ax=ax, linewidth=2)
ax.set_title('Forecast', fontsize=22)
ax.set_ylabel('Monthly production', fontsize=20)
ax.set_xlabel('Timestamp [t]', fontsize=20)
ax.legend(prop={'size': 15})
ax.grid()
Predict method with confidence interval

To generate forecasts use the predict method.

The predict method takes two arguments: the forecast horizon h and level.
* h (int): represents the forecast h steps into the future. In this case, 12 months ahead.
* level (list of floats): this optional parameter is used for probabilistic forecasting. Set the level (or confidence percentile) of your prediction interval. For example, level=[95] means that the model expects the real value to be inside that interval 95% of the times.
The forecast object here is a new data frame that includes a column with
the name of the model and the y hat values, as well as columns for the
uncertainty intervals.
This step should take less than 1 second.
sf.predict(h=12)
                   ds   AutoARIMA
unique_id
1          2016-09-01  109.955437
1          2016-10-01  121.920509
1          2016-11-01  122.458389
…                   …           …
1          2017-06-01   96.751160
1          2017-07-01   97.451607
1          2017-08-01  103.420616
forecast_df = sf.predict(h=12, level = [80, 95])
forecast_df
                   ds   AutoARIMA  AutoARIMA-lo-95  AutoARIMA-lo-80  AutoARIMA-hi-80  AutoARIMA-hi-95
unique_id
1          2016-09-01  109.955437       102.116188       104.829628       115.081245       117.794685
1          2016-10-01  121.920509       112.380608       115.682701       128.158310       131.460403
1          2016-11-01  122.458389       112.200500       115.751114       129.165665       132.716278
…                   …           …                …                …                …                …
1          2017-06-01   96.751160        85.873802        89.638840       103.863487       107.628525
1          2017-07-01   97.451607        86.572372        90.338058       104.565147       108.330833
1          2017-08-01  103.420616        92.540489        96.306480       110.534752       114.300743
We can join the forecast result with the historical data using the pandas function pd.concat(), and then use this result for graphing.
df_plot=pd.concat([df, forecast_df]).set_index('ds').tail(220)
df_plot
                   y  unique_id   AutoARIMA  AutoARIMA-lo-95  AutoARIMA-lo-80  AutoARIMA-hi-80  AutoARIMA-hi-95
ds
2000-05-01  108.7202          1         NaN              NaN              NaN              NaN              NaN
2000-06-01  114.2071          1         NaN              NaN              NaN              NaN              NaN
2000-07-01  111.8737          1         NaN              NaN              NaN              NaN              NaN
…                  …          …           …                …                …                …                …
2017-06-01       NaN        NaN   96.751160        85.873802        89.638840       103.863487       107.628525
2017-07-01       NaN        NaN   97.451607        86.572372        90.338058       104.565147       108.330833
2017-08-01       NaN        NaN  103.420616        92.540489        96.306480       110.534752       114.300743
Now let’s visualize the result of our forecast and the historical data of our time series, and also draw the confidence interval that we obtained when making the prediction with 95% confidence.
fig, ax = plt.subplots(1, 1, figsize = (20, 8))
plt.plot(df_plot['y'], 'k--', linewidth=2, label="y")
plt.plot(df_plot['AutoARIMA'], color="red", linewidth=2, label="AutoARIMA")
# Specify graph features:
ax.fill_between(df_plot.index,
df_plot['AutoARIMA-lo-80'],
df_plot['AutoARIMA-hi-80'],
alpha=.20,
color='lime',
label='AutoARIMA_level_80')
ax.fill_between(df_plot.index,
df_plot['AutoARIMA-lo-95'],
df_plot['AutoARIMA-hi-95'],
alpha=.2,
color='white',
label='AutoARIMA_level_95')
ax.set_title('', fontsize=20)
ax.set_ylabel('Production', fontsize=15)
ax.set_xlabel('Month', fontsize=15)
ax.legend(prop={'size': 15})
ax.grid(True)
plt.show()
Let’s plot the same graph using the plot function that comes in Statsforecast, as shown below.
sf.plot(df, forecast_df, level=[95])
Cross-validation
In previous steps, we’ve taken our historical data to predict the future. However, to assess its accuracy we would also like to know how the model would have performed in the past. To assess the accuracy and robustness of your model on your data, perform Cross-Validation.
With time series data, Cross Validation is done by defining a sliding
window across the historical data and predicting the period following
it. This form of cross-validation allows us to arrive at a better
estimation of our model’s predictive abilities across a wider range of
temporal instances while also keeping the data in the training set
contiguous as is required by our models.
The following graph depicts such a Cross Validation Strategy:
Perform time series cross-validation
Cross-validation of time series models is considered a best practice but
most implementations are very slow. The statsforecast library implements
cross-validation as a distributed operation, making the process less
time-consuming to perform. If you have big datasets you can also perform
Cross Validation in a distributed cluster using Ray, Dask or Spark.
In this case, we want to evaluate the performance of the model over the last 5 windows (n_windows=5), forecasting 12 months ahead in each window and moving the window forward 12 months at a time (step_size=12). Depending on your computer, this step should take around 1 minute.
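To make the windowing concrete, here is a small illustrative sketch (our own, not part of the statsforecast API) of where the five cutoffs fall given h=12 and step_size=12:

# With n_windows=5, h=12 and step_size=12 on monthly data, the cutoffs
# step back one year at a time from the end of the training set
last_date = Y_train_df['ds'].max()
cutoffs = [last_date - pd.DateOffset(months=12 * k) for k in range(5, 0, -1)]
print(cutoffs)  # earliest cutoff: 2011-08-01, matching the output below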
The cross_validation method from the StatsForecast class takes the following arguments:

* df: training data frame
* h (int): represents h steps into the future that are being forecasted. In this case, 12 months ahead.
* step_size (int): step size between each window. In other words: how often do you want to run the forecasting process.
* n_windows (int): number of windows used for cross validation. In other words: how many forecasting processes in the past do you want to evaluate.
crossvalidation_df = sf.cross_validation(df=Y_train_df,
h=12,
step_size=12,
n_windows=5)
The crossvalidation_df object is a new data frame that includes the following columns:

* unique_id: index. If you don’t like working with index just run crossvalidation_df.reset_index().
* ds: datestamp or temporal index
* cutoff: the last datestamp or temporal index for the n_windows.
* y: true value
* "model": columns with the model’s name and fitted value.
crossvalidation_df.head()
                   ds      cutoff           y   AutoARIMA
unique_id
1          2011-09-01  2011-08-01   93.906197  104.758850
1          2011-10-01  2011-08-01  116.763397  118.705879
1          2011-11-01  2011-08-01  116.825798  116.834129
1          2011-12-01  2011-08-01  114.956299  117.070084
1          2012-01-01  2011-08-01   99.966202  103.552246
Model Evaluation
We can now compute the accuracy of the forecast using an appropriate accuracy metric. Here we’ll use the Root Mean Squared Error (RMSE). To do this, we first need to install datasetsforecast, a Python library developed by Nixtla that includes a function to compute the RMSE.
!pip install datasetsforecast
from datasetsforecast.losses import rmse
The function to compute the RMSE takes two arguments:

* The actual values.
* The forecasts, in this case, AutoARIMA.
rmse_value = rmse(crossvalidation_df['y'], crossvalidation_df["AutoARIMA"])  # avoid shadowing the rmse function
print("RMSE using cross-validation: ", rmse_value)
RMSE using cross-validation: 5.5258384
As you have noticed, we have used the cross validation results to
perform the evaluation of our model.
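For a finer-grained view, here is a small sketch of our own (not a statsforecast API) that reuses the imported rmse function to score each cross-validation window separately:

# RMSE per cross-validation window, grouping the results by cutoff date
rmse_per_window = crossvalidation_df.groupby('cutoff').apply(
    lambda window: rmse(window['y'], window['AutoARIMA'])
)
print(rmse_per_window)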
Now we are going to evaluate our model with the results of the predictions. We will use several metrics (MAE, MAPE, MASE, RMSE, SMAPE) to evaluate the accuracy.
from datasetsforecast.losses import mae, mape, mase, rmse, smape
def evaluate_performace(y_hist, y_true, model):
    # Collect one row of metrics for the given model's predictions
    evaluation = {}
    evaluation[model] = {}
    for metric in [mase, mae, mape, rmse, smape]:
        metric_name = metric.__name__
        if metric_name == 'mase':
            # MASE additionally needs the training history to scale the error
            evaluation[model][metric_name] = metric(y_true['y'].values,
                                                    y_true[model].values,
                                                    y_hist['y'].values, seasonality=12)
        else:
            evaluation[model][metric_name] = metric(y_true['y'].values, y_true[model].values)
    return pd.DataFrame(evaluation).T
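For example, using the merged Y_hat_df from the forecast section above (it contains both the y and AutoARIMA columns) together with the training history:

evaluate_performace(Y_train_df, Y_hat_df, model="AutoARIMA")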