Autoregressive Integrated Moving Average (ARIMA) is one of the most popular techniques for time series modeling. It is also called the Box-Jenkins method, named after the statisticians George Box and Gwilym Jenkins, who pioneered many of the developments behind this technique.

We will focus on the following broad areas-

- What is a time series? We have covered this in another article. Click here
- Explore a time series dataset. *Please refer to slides 2 to 7 of the deck below* and Click here
- What is ARIMA modeling?
- Discuss stationarity of a time series
- Fit an ARIMA model, evaluate the model's accuracy and forecast the future

**What is ARIMA modeling?**

An ARIMA model has the following main components; however, a given model need not include all of them.

- Autoregressive (AR)

The value of the time series at time period t (yt) is a linear function of its values at the previous p time periods:

*yt = linear function of yt-1, yt-2, …, yt-p + error*

- Integrated (I)

To make a time series stationary (discussed below), we sometimes need to difference successive observations and model the differenced series. The series is then said to be integrated, and the differencing order is represented as 'd' in an ARIMA model.

- Moving Average (MA)

The value of the time series at time period t (yt) is a linear function of the errors at the previous q time periods:

*yt = linear function of Et-1, Et-2, …, Et-q + error*

Based on combinations of the above components, we can have the following models, among others-

- AR- Only autoregressive terms
- MA- Only moving average terms
- ARMA- Both autoregressive and moving average terms
- ARIMA- Autoregressive, moving average and integration (differencing) terms. After the differencing step, the model reduces to ARMA

A general ARIMA model is written as ARIMA(p, d, q), where p, d and q are the autoregressive, differencing and moving average orders respectively; each is an integer greater than or equal to zero.

**Stationarity of a time series**

A time series is called stationary when it has a constant mean and variance across time, i.e. the mean and variance don't depend on time. In other words, it should show no trend and no change in the dispersion of the data over time. A stationary series that is also uncorrelated across time is called white noise.

*Please refer to slides 8 to 11 of the below deck for live examples of this discussion*

From the plot of our air passengers time series, we can tell that the series is not stationary. A time series needs to be stationary, or be made stationary, before being fed into an ARIMA model.

Statistically, the Augmented Dickey–Fuller (ADF) test is used for testing the stationarity of a time series. The null hypothesis (H0) is that the series is non-stationary, and the alternative hypothesis (Ha) is that the series is stationary.

If the p-value generated by the test is less than 0.05, we can reject the null hypothesis. Otherwise, we fail to reject it.

From the ADF test we can see that the p-value is close to 0.78, which is greater than 0.05, so we fail to reject the null hypothesis: the series is non-stationary.

How do we make a time series stationary? We can do it in two ways-

- Manual- Transformation, differencing, etc. Let's look at an example.
- Automated- The integrated term (d) in the ARIMA model will make it stationary. We will do this in the model-fitting phase. Generally speaking, we rarely need d > 1 to make a time series stationary.
- auto.arima ( ) will take care of this automatically and fit the best model.

**Fit a model, evaluate model’s accuracy and forecast**

We will use auto.arima ( ) to fit the best model, and evaluate model fit and performance using the following main criteria.

*Please refer to slides 12-18 of the below deck*

A good time series model should have the following characteristics-

- Residuals shouldn't show any trend over time.
- The autocorrelation function (ACF) and partial autocorrelation function (PACF) of the residuals shouldn't have significant values (beyond the significance bounds) at any lag. The ACF measures the correlation between the current value and the values at previous lags; the PACF is an extension of the ACF that removes the correlation contributed by the intermediate lags. You can read more on this here.
- Errors shouldn't show any seasonality.
- Errors should be normally distributed.
- Error metrics (MAE, MAPE, MSE, etc.) should be low.
- AIC and BIC should be lower than those of alternative models.

For those who would like to read more about time series analysis in R, here is an excellent free book.

Thank you!

Hi RP, can you please give some examples of when to use seasonal (Holt-Winters, SARIMA) versus non-seasonal (ARIMA) time series models? I am trying to understand when to consider seasonality in modeling and when not to. Thanks in advance.

Hi Shyam:

Typically when you do a time series decomposition as mentioned in the Holt Winters blog, you will know whether there is a strong seasonality in your data. This would also be evident based on domain expertise.

Regarding which technique to try when… I would suggest you give all of them a try, and whichever gives you the lowest error should be the one you select.

Hope this helps

Thanks RP…this helps.

One more point-

Generally, seasonality is quite evident just from observing the data. For example, retail sales in the US peak between Thanksgiving and Christmas, and in India around Diwali. Similarly, sales of stationery, notebooks, etc. peak during the back-to-school season.

On the other hand, sales of commodity products such as toothpaste will typically not show seasonality.

Best, RP