Forecast Error

The difference between actual demand and forecast demand, stated as an absolute value or as a percentage. Related measures include the average forecast error, forecast accuracy, the mean absolute deviation (MAD) and the tracking signal. There are three ways to accommodate forecast error: reduce the error through better forecasting; build more visibility and flexibility into the supply chain; or reduce the lead time over which forecasts are required.
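
As a rough illustration (not from the source), the sketch below computes the forecast error, the mean absolute deviation, one common accuracy figure (100% minus MAPE) and the tracking signal for a small made-up demand series; all numbers and names are illustrative assumptions.

```python
# Minimal sketch of common forecast-error measures; demand figures are hypothetical.
actual   = [102, 95, 110, 98, 105, 120]    # actual demand per period
forecast = [100, 100, 100, 100, 110, 110]  # forecast demand per period

errors = [a - f for a, f in zip(actual, forecast)]   # error = actual - forecast
mad = sum(abs(e) for e in errors) / len(errors)      # mean absolute deviation
mape = 100 * sum(abs(e) / a for e, a in zip(errors, actual)) / len(errors)
accuracy = 100 - mape                                # one common definition of forecast accuracy
tracking_signal = sum(errors) / mad                  # cumulative error scaled by MAD

print(f"MAD={mad:.1f}  MAPE={mape:.1f}%  accuracy={accuracy:.1f}%  tracking signal={tracking_signal:.2f}")
```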

A forecast error is the difference between the actual or realized value and the predicted or forecast value of a time series or any other phenomenon of interest. Because the forecast error is on the same scale as the underlying data, forecast errors from different series can only be compared when the series themselves are on the same scale.

In simple cases, a forecast is compared with an outcome at a single time-point and a summary of forecast errors is constructed over a collection of such time-points. Here the forecast may be assessed using the difference or using a proportional error. By convention, the error is defined using the value of the outcome minus the value of the forecast.
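
Written out as formulas (using standard notation, which is an assumption here since the source gives no symbols), with y_t the outcome and ŷ_t the forecast at time t:

```latex
e_t = y_t - \hat{y}_t \qquad \text{(forecast error)}
\qquad\qquad
p_t = 100\,\frac{e_t}{y_t} \qquad \text{(percentage, i.e. proportional, error)}
```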

In other cases, a forecast may consist of predicted values over a number of lead times; in this case, assessing forecast error may require more general ways of comparing the time profiles of the forecast and the outcome. If a main application of the forecast is to predict when certain thresholds will be crossed, one possible assessment is the timing error – the difference in time between when the outcome crosses the threshold and when the forecast does so. When there is interest in the maximum value being reached, forecasts can be assessed using any of the following (a minimal sketch of these measures appears after the list):

  • the difference in the times of the peaks;
  • the difference in the peak values in the forecast and outcome;
  • the difference between the peak value of the outcome and the value forecast for that time point.
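
The following Python sketch illustrates the timing error and the three peak-based measures above; the outcome and forecast series, the threshold and the helper name first_crossing are all my own illustrative assumptions.

```python
# Hypothetical outcome and forecast over the same lead times (e.g. weeks 0..7).
outcome  = [3, 5, 9, 14, 18, 15, 10, 6]
forecast = [2, 4, 7, 11, 16, 17, 12, 7]
threshold = 12

def first_crossing(series, threshold):
    """Index of the first value at or above the threshold (None if never crossed)."""
    return next((t for t, x in enumerate(series) if x >= threshold), None)

# Timing error: when the outcome crosses the threshold vs. when the forecast does
# (both series cross the threshold in this example).
timing_error = first_crossing(outcome, threshold) - first_crossing(forecast, threshold)

peak_time_outcome = outcome.index(max(outcome))
peak_time_forecast = forecast.index(max(forecast))

diff_of_peak_times    = peak_time_outcome - peak_time_forecast   # difference of times of the peaks
diff_of_peak_values   = max(outcome) - max(forecast)             # difference in the peak values
peak_vs_forecast_then = max(outcome) - forecast[peak_time_outcome]  # outcome peak vs. forecast at that time

print(timing_error, diff_of_peak_times, diff_of_peak_values, peak_vs_forecast_then)
```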

Forecast error can be summarized over time or over a group of units. If we average the forecast error over a time series of forecasts for the same product or phenomenon, this is called a calendar (or time-series) forecast error. If we average it over multiple products for the same period, this is a cross-sectional forecast error. Reference class forecasting has been developed to reduce forecast error, and combining forecasts has also been shown to reduce forecast error.
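
As a small illustration (again with made-up figures and product names), the sketch below summarizes a table of absolute errors both ways: over time for one product (calendar) and across products for one period (cross-sectional).

```python
# Hypothetical absolute forecast errors, by product (rows) and month (columns).
errors = {
    "product_A": [4, 6, 3, 5],
    "product_B": [10, 8, 12, 9],
    "product_C": [2, 1, 3, 2],
}

# Calendar (time-series) forecast error: average over time for one product.
calendar_mad_A = sum(errors["product_A"]) / len(errors["product_A"])

# Cross-sectional forecast error: average over all products for one period (month 0 here).
month = 0
cross_sectional_mad = sum(e[month] for e in errors.values()) / len(errors)

print(calendar_mad_A, cross_sectional_mad)
```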

Common forecasting errors

  • Anchoring and adjusting – When making an estimate, people often start with an initial value – the anchor – and then adjust from it. But estimates tend to stay too close to the anchor, even when it is an implausible value. Anchoring can cause people to underestimate upward trends because they stay too close to the most recent value, which is particularly severe for exponential trends. The solution: use statistical methods, rather than judgment, to forecast trends.
  • Seeing patterns in randomness – Human beings have a tendency to see systematic patterns even where there are none. People love storytelling and are brilliant at inventing explanations for random movements in graphs. The solution: don’t believe that you can do a better job than your forecasting system.
  • Putting your calling card on the forecast – People tend to make many small adjustments and only a few large adjustments to their forecasts, but small adjustments to the software’s forecasts waste time and often reduce accuracy. The solution: only adjust for important reasons, and document those reasons.
  • Attaching too much weight to judgment relative to statistical forecasts – Even though the evidence shows that judgment is less accurate than statistical forecasts, people continue to rely on their judgment. The solution: have more confidence in statistical methods and adjust them only when you are sure it is absolutely necessary; or take a simple average of the statistical forecast and your independent judgmental forecast so that they are equally weighted (see the sketch after this list).
  • Recency bias – Companies often don’t want to use data that goes back more than a few years because the trends were different then, but the statistical methods embedded in forecasting software need plenty of data to give reliable forecasts. The solution: give your software a chance, and you might not need to adjust its forecasts. Many statistical methods are designed to adapt to changes in trends, and they are far less likely than human judges to see false new trends in recent data.
  • Optimism bias – Psychologists say people have an innate bias toward optimism. Even though optimism bias is a sign of good mental health, negative (downward) adjustments to forecasts tend to be more successful than positive ones. The solution: break complex judgments into smaller parts, for example by adjusting separately for price reductions, promotions and new customers instead of making a single total adjustment, and consider keeping a database of the estimated effects of past special events such as sales promotions.
  • Political bias – Adjusting forecasts for internal political reasons, such as overly optimistic forecasts that make marketing managers look good, harms accuracy. The solution: require adjustments to be documented and review their effect on accuracy, and argue for relying on statistical forecasts unless very special circumstances apply.
  • Confusing forecasts with decisions – A forecast is a best estimate of what will happen in the future: “I think we’ll sell 200 units.” A decision is a number designed to achieve one or more objectives: “I think we should produce 250 units in case demand is unexpectedly high, to balance the possibility of lost sales against the cost of holding more stock.” The solution: separate the forecast from the decision. Obtain the forecast first, label it as a forecast, and then use it as a basis for the decision.
  • Group biases – Having statistical forecasts adjusted by a group of people can be dangerous, as most people don’t feel comfortable going against a group decision. The solution: use the Delphi Method, in which panelists provide forecasts individually and privately; the results are tallied, summary statistics are fed back, and the panel is re-polled, repeating the process until consensus emerges. The median estimate from the final round is then used as the forecast (also illustrated in the sketch after this list).
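
The sketch below (with hypothetical numbers) illustrates two of the remedies mentioned above: the equally weighted average of a statistical and a judgmental forecast, and the Delphi-style use of the median of the final-round panel estimates.

```python
import statistics

# Equal weighting of a statistical forecast and an independent judgmental forecast.
statistical_forecast = 200
judgmental_forecast = 240
blended_forecast = 0.5 * statistical_forecast + 0.5 * judgmental_forecast  # 220.0

# Delphi-style aggregation: the median of the panelists' private estimates
# from the final polling round is used as the forecast.
final_round_estimates = [190, 205, 210, 215, 230]
delphi_forecast = statistics.median(final_round_estimates)  # 210

print(blended_forecast, delphi_forecast)
```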
