Using the Exponential Smoothing Approach to Time Series Forecasting on 6 DOF Tracking Data

Maurice R. Masliah

Time series analysis of 6 degree-of-freedom (DOF) tracking data may offer insight into human coordination in complex tasks. This section explores the use of simple exponential smoothing as a forecasting method for tracking data.

Regression and simple exponential smoothing are two methods for forecasting time series with no trend (trend being the upward or downward movement that characterizes a time series over a period of time, i.e. long-run growth or decline). Regression fits straight lines to the data by the method of least squares; for a series with no trend, the fitted line reduces to a constant level. The least squares estimate of the average level of a time series with no trend is [1]

b_0(T) = \bar{y} = \frac{1}{T} \sum_{t=1}^{T} y_t

where

T is the number of time periods observed,

y_1, y_2, …, y_T denote the observations of the time series, and

b_0(T) is the least squares estimate of the average level of the time series through period T.

As each new data point y_T is observed, a new estimate of the average level of the series can be made by recalculating the mean of the observations.
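Written incrementally (a standard algebraic identity, stated here for contrast and not taken from [1]), the regression update gives each new observation a weight of 1/T:

b_0(T) = \frac{(T-1) b_0(T-1) + y_T}{T} = b_0(T-1) + \frac{1}{T} \left( y_T - b_0(T-1) \right)

Each new observation therefore has less and less influence as the series grows, in contrast with the constant fraction applied by exponential smoothing, described next.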

Simple exponential smoothing, on the other hand, generates new estimates by adjusting the forecast up or down according to the previous forecast error. The forecast error is given by

e_T = y_T - b_0(T-1)

where e_T is the difference between the new observation y_T and the estimate of it made from the data through period T-1. Simple exponential smoothing generates the estimate b_0(T) by modifying the old estimate b_0(T-1) by a fraction of the forecast error e_T, such that

b_0(T) = b_0(T-1) + a e_T

where a is the fraction. The fraction a is called the smoothing constant. For simplicity, and to follow the notation used by [1], we shall define S_T = b_0(T). S_T, called the smoothed estimate or smoothed statistic, is equal to

S_T = a y_T + a(1-a) y_{T-1} + a(1-a)^2 y_{T-2} + \cdots + (1-a)^T S_0

so the weights on past observations decay exponentially, which gives the method its name.

In practice, S_T is computed recursively in the following fashion:

S_T = a y_T + (1 - a) S_{T-1}
One way to compute the initial estimate, S_0, is to take the average of the first several observations.
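As a concrete illustration, here is a minimal Python sketch of the recursion and this initialization (function and variable names are my own, not from [1] or the original study):

    import numpy as np

    def simple_exponential_smoothing(y, a, n_init=20):
        """Return the smoothed statistics S_1, ..., S_T for the series y.

        S_0 is taken as the mean of the first n_init observations,
        following the initialization described above.
        """
        s = np.empty(len(y))
        prev = np.mean(y[:n_init])            # initial estimate S_0
        for t, yt in enumerate(y):
            prev = a * yt + (1.0 - a) * prev  # S_T = a*y_T + (1-a)*S_{T-1}
            s[t] = prev
        return s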

 

Optimizing the method of simple exponential smoothing then becomes a matter of determining the best smoothing constant a. Large values of the smoothing constant correspond to quickly damping out the effects of older observations, while small values of a put stronger weight on older observations. The best smoothing constant can be found by computing the squared errors between the forecasts and the actual observations for different values of a, and selecting the a which minimizes the sum of the squared errors.
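A minimal sketch of this search in Python, under the conventions above (the forecast for period T is S_{T-1}, so each error is computed before the update; names are hypothetical):

    import numpy as np

    def sse_for_constant(y, a, s0):
        """Sum of squared one-step-ahead forecast errors for smoothing constant a."""
        s, sse = s0, 0.0
        for yt in y:
            e = yt - s     # forecast error e_T = y_T - S_{T-1}
            sse += e * e
            s = s + a * e  # update S_T = S_{T-1} + a * e_T
        return sse

    def best_constant(y, s0, grid=np.arange(0.01, 1.0, 0.01)):
        """Select the smoothing constant on the grid with the smallest SSE."""
        sses = [sse_for_constant(y, a, s0) for a in grid]
        return grid[int(np.argmin(sses))]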

Bowerman & O’Connell [1] (pp. 127-130) state that in practice smoothing constants between 0.01 and 0.3 usually work quite well. If simulation on historical data to determine an optimal smoothing constant indicates that the "best" smoothing constant is greater than 0.3, then it is possible that the values in the time series are dependent upon one another. This dependency may be captured by time series methods which analyze the autocorrelations of the data.

What about 6 DOF tracking data? Are the individual observations dependent upon one another, or is simple exponential smoothing an appropriate method for forecasting human tracking error? As an example, the data from one 40 second tracking trial is analyzed here. The next figure shows the tracking errors for a 6 DOF tracking experiment:

[Figure: tracking errors over the 40 second trial, one plot per degree of freedom]

These time series are made up of tracking errors: the data plotted equal the operator’s cursor position minus the target position for each degree of freedom over time. To generate these series, cursor and target positions were collected every 0.05 seconds over a 40 second tracking trial using the Spaceball (an isometric input device).
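A sketch of how such error series might be formed (file names, array shapes, and variable names are assumptions for illustration, not details from the original experiment):

    import numpy as np

    # Hypothetical recordings: 40 s sampled every 0.05 s gives 800 samples,
    # with one column per degree of freedom.
    cursor = np.loadtxt("cursor_positions.txt")  # hypothetical file, shape (800, 6)
    target = np.loadtxt("target_positions.txt")  # hypothetical file, shape (800, 6)

    errors = cursor - target  # per-DOF tracking error over time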

To determine the best smoothing constant, S_0 was computed as the average value of the series over the first second (about 20 observations). The sum of the squared errors for different values of a was computed over the last 39 seconds and is plotted in the next figure.

[Figure: sum of squared forecast errors versus smoothing constant a, one plot per degree of freedom]

From the general shape of these graphs, one can see that as the smoothing constant gets larger, the sum of squared errors gets smaller. The "best" value for the smoothing constant a does not lie within the 0.01 to 0.3 range. Although only one trial is depicted here, these results are consistent across all tracking trials.
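In code, the search just described might look like the following, reusing the hypothetical best_constant and errors arrays from the earlier sketches:

    e = errors[:, 0]                    # error series for one degree of freedom
    s0 = np.mean(e[:20])                # S_0: average over the first second (20 samples)
    a_best = best_constant(e[20:], s0)  # minimize SSE over the remaining 39 seconds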

This analysis is taken as evidence that a time series analysis which examines the autocorrelations of the data may provide insight into the process generating the data.
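For completeness, the lag-k sample autocorrelation that such an analysis would examine can be sketched as follows (the standard estimator, not code from the original study):

    import numpy as np

    def autocorrelation(y, k):
        """Lag-k sample autocorrelation r_k of the series y, for k >= 1."""
        y = np.asarray(y, dtype=float)
        d = y - y.mean()
        return np.sum(d[:-k] * d[k:]) / np.sum(d * d)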


Reference

[1] B. L. Bowerman and R. T. O'Connell, Time Series and Forecasting. North Scituate, Massachusetts: Duxbury Press, 1979.
