Abstract
Previous work has shown that judgmental combinations of separate forecasts are inferior to those produced by simple averaging. However, in that research, judges were not informed of outcomes after producing each combined forecast. Our first experiment shows that, when given this information, judges learn to weight the separate forecasts appropriately. Their judgments, though improved, are still not significantly better than the simple average because they contain a random error component. Bootstrapping can be used to remove this inconsistency and produce results that outperform the average. In our second and third experiments, we provided judges with information about the errors made by the individual forecasters. Results show that feedback on each forecaster's mean absolute percentage error, updated each period, enables judges to combine the forecasts in a way that outperforms the simple average.
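The abstract refers to three mechanisms: the simple average benchmark, period-by-period MAPE feedback for weighting the component forecasters, and bootstrapping the judge (regressing the judge's combined forecasts on the component forecasts to strip out random error). The sketch below illustrates all three on synthetic data; the series, forecaster error levels, and weighting rule are assumptions for illustration only, not the authors' experimental materials or their exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic task (hypothetical): two forecasters predict the same
# 20-period series, with forecaster A more accurate than forecaster B.
actual = 100 + np.cumsum(rng.normal(0, 5, size=20))
forecaster_a = actual + rng.normal(0, 4, size=20)
forecaster_b = actual + rng.normal(0, 10, size=20)

def mape(forecast, outcome):
    """Mean absolute percentage error, in percent."""
    return np.mean(np.abs((outcome - forecast) / outcome)) * 100

# 1. Simple average: the benchmark the judges must beat.
simple_avg = (forecaster_a + forecaster_b) / 2

# 2. MAPE-based weighting (one reading of the feedback manipulation):
#    weight each forecaster by the inverse of its MAPE to date,
#    updated each period as new outcomes arrive.
weighted = np.empty(20)
weighted[0] = simple_avg[0]  # no error history in period 1
for t in range(1, 20):
    m_a = mape(forecaster_a[:t], actual[:t])
    m_b = mape(forecaster_b[:t], actual[:t])
    w_a = (1 / m_a) / (1 / m_a + 1 / m_b)
    weighted[t] = w_a * forecaster_a[t] + (1 - w_a) * forecaster_b[t]

# 3. Bootstrapping the judge: simulate a judge whose combinations carry
#    a random error component, then replace the judge with a linear
#    model fitted to those combinations, removing the inconsistency.
judge = 0.6 * forecaster_a + 0.4 * forecaster_b + rng.normal(0, 3, size=20)
design = np.column_stack([forecaster_a, forecaster_b, np.ones(20)])
coef, *_ = np.linalg.lstsq(design, judge, rcond=None)
bootstrapped = design @ coef

print(f"MAPE, simple average:    {mape(simple_avg, actual):.2f}%")
print(f"MAPE, inverse-MAPE mix:  {mape(weighted, actual):.2f}%")
print(f"MAPE, simulated judge:   {mape(judge, actual):.2f}%")
print(f"MAPE, bootstrapped judge:{mape(bootstrapped, actual):.2f}%")
```

Under these assumptions, the inverse-MAPE weights drift toward the more accurate forecaster as error feedback accumulates, and the bootstrapped model keeps the judge's implicit weights while discarding the period-to-period noise.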
| Original language | English |
| --- | --- |
| Pages (from-to) | 227-246 |
| Number of pages | 20 |
| Journal | International Journal of Forecasting |
| Volume | 15 |
| Issue number | 3 |
| DOIs | |
| State | Published - 1 Jan 1999 |
| Externally published | Yes |
Keywords
- Combining forecasts
- Feedback
- Forecasting
- Information integration
- Judgment
ASJC Scopus subject areas
- Business and International Management