Combining forecasts: What information do judges need to outperform the simple average?

Ilan Fischer, Nigel Harvey

Research output: Contribution to journal › Article › peer-review

64 Scopus citations

Abstract

Previous work has shown that combinations of separate forecasts produced by judgment are inferior to those produced by simple averaging. However, in that research judges were not informed of outcomes after producing each combined forecast. Our first experiment shows that when they are given this information, they learn to weight the separate forecasts appropriately. However, their judgments, though improved, are still not significantly better than the simple average because they contain a random error component. Bootstrapping can be used to remove this inconsistency and produce results that outperform the average. In our second and third experiments, we provided judges with information about the errors made by the individual forecasters. Results show that giving judges each forecaster's mean absolute percentage error, updated each period, enables them to combine the forecasts in a way that outperforms the simple average.
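The two combination rules the abstract contrasts can be made concrete. The sketch below shows the simple-average benchmark alongside one plausible way a judge might exploit per-period error feedback: weighting each forecaster by the inverse of their mean absolute percentage error (MAPE) to date. The inverse-MAPE rule and all numbers are illustrative assumptions for exposition, not the paper's experimental procedure.

```python
import numpy as np

# Past forecasts from two forecasters over five periods (rows = forecasters),
# plus the realised outcomes. All numbers are illustrative.
forecasts = np.array([[102., 98., 110., 95., 105.],
                      [120., 80., 130., 70., 140.]])
actuals = np.array([100., 100., 100., 100., 100.])

def simple_average(fcs):
    """Equal-weight combination: the benchmark the judges must beat."""
    return fcs.mean(axis=0)

def inverse_mape_weights(fcs, acts):
    """Weight each forecaster by the inverse of their MAPE so far,
    a stand-in for the per-period error feedback given to judges."""
    mape = np.mean(np.abs((fcs - acts) / acts), axis=1)
    w = 1.0 / mape
    return w / w.sum()

w = inverse_mape_weights(forecasts, actuals)
next_fcs = np.array([104., 125.])            # the two new forecasts to combine
print("simple average:", next_fcs.mean())    # equal weights
print("MAPE-weighted :", w @ next_fcs)       # accuracy-informed weights
```

With these toy numbers the more accurate forecaster receives roughly 85% of the weight, so the weighted combination stays much closer to that forecaster than the simple average does.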

Original language: English
Pages (from-to): 227-246
Number of pages: 20
Journal: International Journal of Forecasting
Volume: 15
Issue number: 3
State: Published - 1 Jan 1999
Externally published: Yes

Keywords

  • Combining forecasts
  • Feedback
  • Forecasting
  • Information integration
  • Judgment

ASJC Scopus subject areas

  • Business and International Management
