## 9.10 ARIMA vs ETS

It is a commonly held myth that ARIMA models are more general than exponential smoothing. While linear exponential smoothing models are all special cases of ARIMA models, the non-linear exponential smoothing models have no equivalent ARIMA counterparts. On the other hand, there are also many ARIMA models that have no exponential smoothing counterparts. In particular, all ETS models are non-stationary, while some ARIMA models are stationary.

The ETS models with seasonality or non-damped trend or both have two unit roots (i.e., they need two levels of differencing to make them stationary). All other ETS models have one unit root (they need one level of differencing to make them stationary).
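The two-unit-root claim can be illustrated by simulating from the ARIMA(0,2,2) reduced form of an ETS(A,A,N) model (a sketch in base R rather than the fable functions used elsewhere in this chapter; the smoothing parameters α = 0.3 and β = 0.1 are arbitrary choices):

```r
# Sketch: ETS(A,A,N) is equivalent to ARIMA(0,2,2) with
# theta1 = alpha + beta - 2 and theta2 = 1 - alpha, so simulating
# from that reduced form gives a series with two unit roots.
set.seed(123)
alpha <- 0.3
beta  <- 0.1
y <- arima.sim(
  model = list(order = c(0, 2, 2),
               ma = c(alpha + beta - 2, 1 - alpha)),
  n = 200
)
# Phillips-Perron unit root tests (stats::PP.test): small p-values
# suggest stationarity. Typically the first difference still looks
# non-stationary, while the second difference does not.
PP.test(diff(y))$p.value                   # after one difference
PP.test(diff(y, differences = 2))$p.value  # after two differences
```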

Table 9.3 gives the equivalence relationships for the two classes of models. For the seasonal models, the ARIMA parameters have a large number of restrictions.

| ETS model | ARIMA model | Parameters |
| --- | --- | --- |
| ETS(A,N,N) | ARIMA(0,1,1) | \(\theta_1 = \alpha - 1\) |
| ETS(A,A,N) | ARIMA(0,2,2) | \(\theta_1 = \alpha + \beta - 2\), \(\theta_2 = 1 - \alpha\) |
| ETS(A,A\(_d\),N) | ARIMA(1,1,2) | \(\phi_1 = \phi\), \(\theta_1 = \alpha + \phi\beta - 1 - \phi\), \(\theta_2 = (1 - \alpha)\phi\) |
| ETS(A,N,A) | ARIMA(0,1,\(m\))(0,1,0)\(_m\) | |
| ETS(A,A,A) | ARIMA(0,1,\(m+1\))(0,1,0)\(_m\) | |
| ETS(A,A\(_d\),A) | ARIMA(1,0,\(m+1\))(0,1,0)\(_m\) | |
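The first of these equivalences can be checked numerically. The following sketch uses base R stand-ins (`HoltWinters()` for simple exponential smoothing, `arima()` for the ARIMA model) on a simulated local-level series; the two models are estimated by different optimisers, so \(\hat\theta_1\) and \(\hat\alpha - 1\) will agree only approximately.

```r
# Sketch: for ETS(A,N,N) ~ ARIMA(0,1,1) we expect theta1 ~ alpha - 1.
set.seed(42)
y <- ts(cumsum(rnorm(300)) + rnorm(300))  # simulated local-level series
# Simple exponential smoothing (no trend, no seasonality)
alpha  <- HoltWinters(y, beta = FALSE, gamma = FALSE)$alpha
# Equivalent ARIMA(0,1,1) model
theta1 <- coef(arima(y, order = c(0, 1, 1)))["ma1"]
# The two values below should be close to each other
c(theta1 = unname(theta1), alpha_minus_1 = unname(alpha - 1))
```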

The AICc is useful for selecting between models in the same class. For example, we can use it to select an ARIMA model from among candidate ARIMA models, or an ETS model from among candidate ETS models. However, it cannot be used to compare ETS and ARIMA models, because they belong to different model classes and their likelihoods are computed in different ways. The examples below demonstrate selecting between these classes of models.

### Example: Comparing `ARIMA()` and `ETS()` on non-seasonal data

We can use time series cross-validation to compare an ARIMA model and an ETS model. Let's consider the Australian population from the `global_economy` dataset, as introduced in Section 8.2.

```
aus_economy <- global_economy %>%
  filter(Code == "AUS") %>%
  mutate(Population = Population / 1e6)
aus_economy %>%
  slice(-n()) %>%
  stretch_tsibble(.init = 10) %>%
  model(
    ETS(Population),
    ARIMA(Population)
  ) %>%
  forecast(h = 1) %>%
  accuracy(aus_economy)
#> # A tibble: 2 x 10
#>   .model            Country  .type     ME   RMSE    MAE   MPE  MAPE  MASE  ACF1
#>   <chr>             <fct>    <chr>  <dbl>  <dbl>  <dbl> <dbl> <dbl> <dbl> <dbl>
#> 1 ARIMA(Population) Austral… Test  0.0420 0.194  0.0789 0.277 0.509 0.317 0.188
#> 2 ETS(Population)   Austral… Test  0.0202 0.0774 0.0543 0.112 0.327 0.218 0.506
```

In this case the ETS model has higher accuracy on the cross-validated performance measures. Below we generate and plot forecasts from an ETS model for the next 5 years.
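A sketch of that step using the same fable pipeline as above (`h = "5 years"` assumes annual data, which holds for `global_economy`; the result is a plot, so no console output is shown):

```r
# Fit an ETS model to the full series and plot 5-year forecasts
aus_economy %>%
  model(ETS(Population)) %>%
  forecast(h = "5 years") %>%
  autoplot(aus_economy)
```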

### Example: Comparing `ARIMA()` and `ETS()` on seasonal data

In this case we want to compare seasonal ARIMA and ETS models applied to the quarterly cement production data (from `aus_production`). Because the series is relatively long, we can afford to use a training set and a test set rather than time series cross-validation; this is much faster. We create a training set from the beginning of 1988 to the end of 2007, and select an ARIMA and an ETS model using the `ARIMA()` and `ETS()` functions.

```
# Consider the cement data beginning in 1988
cement <- aus_production %>%
  filter(year(Quarter) >= 1988)
# Use 20 years of the data as the training set
train <- cement %>%
  filter(year(Quarter) <= 2007)
```

The output below shows the ARIMA model selected and estimated by `ARIMA()`. The ARIMA model does well in capturing all the dynamics in the data, as the residuals seem to be white noise.

```
fit_arima <- train %>% model(ARIMA(Cement))
report(fit_arima)
#> Series: Cement
#> Model: ARIMA(1,0,1)(2,1,1)[4] w/ drift
#>
#> Coefficients:
#>          ar1      ma1   sar1     sar2     sma1  constant
#>       0.8886  -0.2366  0.081  -0.2345  -0.8979     5.388
#> s.e.  0.0842   0.1334  0.157   0.1392   0.1780     1.484
#>
#> sigma^2 estimated as 11456:  log likelihood=-463.5
#> AIC=941   AICc=942.7   BIC=957.4
augment(fit_arima) %>%
  gg_tsdisplay(.resid, lag_max = 16, plot_type = "hist")
```

The output below shows the ETS model selected and estimated by `ETS()`. This model also does well in capturing all the dynamics in the data, as the residuals similarly appear to be white noise.

```
fit_ets <- train %>% model(ETS(Cement))
report(fit_ets)
#> Series: Cement
#> Model: ETS(M,N,M)
#>   Smoothing parameters:
#>     alpha = 0.7534
#>     gamma = 1e-04
#>
#>   Initial states:
#>      l     s1    s2    s3     s4
#>   1695  1.031 1.045 1.011 0.9122
#>
#>   sigma^2: 0.0034
#>
#>   AIC AICc  BIC
#>  1104 1106 1121
augment(fit_ets) %>%
  gg_tsdisplay(.resid, lag_max = 16, plot_type = "hist")
```

The output below evaluates the forecasting performance of the two competing models over the test set. In this case the ARIMA model seems to be the slightly more accurate model based on the test set RMSE, MAPE and MASE.

```
# Generate forecasts and compare accuracy over the test set
bind_rows(
  fit_arima %>% accuracy(),
  fit_ets %>% accuracy(),
  fit_arima %>% forecast(h = "2 years 6 months") %>%
    accuracy(cement),
  fit_ets %>% forecast(h = "2 years 6 months") %>%
    accuracy(cement)
)
#> # A tibble: 4 x 9
#>   .model        .type         ME  RMSE   MAE    MPE  MAPE  MASE    ACF1
#>   <chr>         <chr>      <dbl> <dbl> <dbl>  <dbl> <dbl> <dbl>   <dbl>
#> 1 ARIMA(Cement) Training   -6.21  100.  79.9 -0.670  4.37 0.546 -0.0113
#> 2 ETS(Cement)   Training   12.8   103.  80.0  0.427  4.41 0.547 -0.0528
#> 3 ARIMA(Cement) Test     -161.    216. 186.  -7.71   8.68 1.27   0.387
#> 4 ETS(Cement)   Test     -171.    222. 191.  -8.07   8.85 1.30   0.579
```

Notice that the ETS model fits the training data slightly better than the ARIMA model, but that the ARIMA model provides more accurate forecasts on the test set. A good fit to training data is never an indication that the model will forecast well. Below we generate and plot forecasts from an ETS model for the next 3 years.

```
# Generate forecasts from an ETS model
cement %>%
  model(ETS(Cement)) %>%
  forecast(h = "3 years") %>%
  autoplot(cement)
```