An MDP-based recommender system

Guy Shani, David Heckerman, Ronen I. Brafman

Research output: Contribution to journal › Article › peer-review

689 Scopus citations

Abstract

Typical recommender systems adopt a static view of the recommendation process and treat it as a prediction problem. We argue that it is more appropriate to view the problem of generating recommendations as a sequential optimization problem and, consequently, that Markov decision processes (MDPs) provide a more appropriate model for recommender systems. MDPs introduce two benefits: they take into account the long-term effects of each recommendation and the expected value of each recommendation. To succeed in practice, an MDP-based recommender system must employ a strong initial model, must be solvable quickly, and should not consume too much memory. In this paper, we describe our particular MDP model, its initialization using a predictive model, the solution and update algorithm, and its actual performance on a commercial site. We also describe the particular predictive model we used, which outperforms previous models. Our system is one of a small number of commercially deployed recommender systems. As far as we know, it is the first to report experimental analysis conducted on a real commercial site. These results validate the commercial value of recommender systems, and in particular, of our MDP-based approach.
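The abstract describes the approach only at a high level. The following is a minimal, hypothetical sketch of the general idea rather than the authors' system: states summarize recent purchases (here just the last item), actions are items to recommend, transition probabilities are estimated from purchase sequences by a simple first-order predictive model, rewards are item profits, and a recommendation policy is computed by value iteration. All data, variable names, and the "boost" factor below are illustrative assumptions.

```python
# Illustrative MDP-style recommender sketch (not the authors' implementation):
# states = last purchased item, actions = items to recommend, transitions from
# a toy first-order predictive model, policy computed by value iteration.

from collections import defaultdict

# Toy purchase sequences (each list is one user's ordered purchases).
sequences = [
    ["a", "b", "c"],
    ["a", "c"],
    ["b", "c", "a"],
    ["a", "b", "a", "c"],
]

# Profit earned when an item is actually bought (the MDP reward).
reward = {"a": 1.0, "b": 2.0, "c": 3.0}

items = sorted(reward)
gamma = 0.9  # discount factor capturing long-term value of a recommendation

# 1. Predictive model: first-order transition counts, i.e. P(next item | last item).
counts = defaultdict(lambda: defaultdict(float))
for seq in sequences:
    for prev, nxt in zip(seq, seq[1:]):
        counts[prev][nxt] += 1.0

def transition(state, action):
    """Probability the user buys `action` next given last item `state`.
    Recommending an item is assumed to boost its purchase probability."""
    total = sum(counts[state].values()) or 1.0
    base = counts[state][action] / total
    boost = 1.5  # assumed lift from showing the recommendation
    return min(1.0, boost * base)

# 2. Value iteration: V(s) = max_a p(buy a | s, a) * (reward[a] + gamma * V(a)).
#    (The no-purchase outcome is treated as terminal here for brevity.)
V = {s: 0.0 for s in items}
for _ in range(100):
    V = {
        s: max(transition(s, a) * (reward[a] + gamma * V[a]) for a in items)
        for s in items
    }

# 3. Greedy policy: recommend the action with the highest expected value.
policy = {
    s: max(items, key=lambda a: transition(s, a) * (reward[a] + gamma * V[a]))
    for s in items
}

print("State values:", V)
print("Recommendation per state:", policy)
```

In the system described in the paper, states encode a short history of recent purchases rather than a single item, and the MDP is initialized from a considerably richer predictive model than the toy counts above; the sketch only illustrates why an MDP policy can trade immediate profit against the long-term value of steering the user toward profitable future states.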

Original language: English
Journal: Journal of Machine Learning Research
Volume: 6
State: Published - 27 Sep 2005

Keywords

  • Commercial applications
  • Learning
  • Markov decision processes
  • Recommender systems

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Software
  • Statistics and Probability
  • Artificial Intelligence
