Recommender systems: where are we headed?
Daniel Tunkelang comments on the recent progress in collaborative filtering:
> (…) the machine learning community, much like the information retrieval community, generally prefers black box approaches, (…) If the goal is to optimize one-shot recommendations, they are probably right. But I maintain that the process of picking a movie, like most information seeking tasks, is inherently interactive, (…)
I disagree with him. Even for non-interactive recommendations, the Machine Learning community is off-track for two reasons:
- They fail to take diversity into account. In Information Retrieval, we know that a system with high precision (all retrieved documents are relevant) but low recall (few of the relevant documents are retrieved) is a poor system. There is no such balance in collaborative filtering: precision above all else is the goal. This is wrong. Diversity metrics must be used; a minimal sketch of one follows this list.
- They work over static data sets. A system like Netflix is not static, so accuracy on a static data set may be a poor predictor of real-world performance. The problem is intrinsically nonlinear: people will rate different items, and they will rate them differently, if you change the recommender system. The feedback loop may work against you or in your favour, and the effect might be large or small. As far as I can tell, I am the only one who keeps pointing out this fundamental, but never addressed, limitation of working over static data sets; a toy simulation of the feedback loop also follows below. Update: This has absolutely nothing to do with online versus batch algorithms.
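To make the first point concrete, here is a minimal Python sketch, with made-up movies and genre vectors, of what measuring diversity alongside precision and recall could look like. Intra-list diversity (mean pairwise cosine distance within the recommended list) is one common metric, not the only one; everything else in the example is hypothetical.

```python
from itertools import combinations
import numpy as np

def precision_recall(recommended, relevant):
    """Standard IR precision and recall over sets of item ids."""
    hits = len(set(recommended) & set(relevant))
    return hits / len(recommended), hits / len(relevant)

def intra_list_diversity(recommended, vectors):
    """Mean pairwise cosine distance within the recommended list:
    near 0 means the items are all alike, near 1 means they are varied."""
    dists = []
    for a, b in combinations(recommended, 2):
        cos = np.dot(vectors[a], vectors[b]) / (
            np.linalg.norm(vectors[a]) * np.linalg.norm(vectors[b]))
        dists.append(1.0 - cos)
    return float(np.mean(dists))

# Hypothetical genre vectors (action, romance, family) for five movies.
vectors = {
    "die_hard":   np.array([1.0,  0.0,  0.0]),
    "die_hard_2": np.array([0.9,  0.1,  0.0]),
    "rambo":      np.array([0.95, 0.05, 0.0]),
    "amelie":     np.array([0.0,  1.0,  0.0]),
    "shrek":      np.array([0.0,  0.2,  1.0]),
}
relevant = {"die_hard", "rambo", "amelie"}  # what the user actually likes

# Two top-3 lists with identical precision and recall but very
# different diversity: accuracy alone cannot tell them apart.
for name, recs in [("narrow", ["die_hard", "die_hard_2", "rambo"]),
                   ("varied", ["die_hard", "amelie", "shrek"])]:
    p, r = precision_recall(recs, relevant)
    d = intra_list_diversity(recs, vectors)
    print(f"{name}: precision={p:.2f} recall={r:.2f} diversity={d:.2f}")
```

Both lists come out identical on precision and recall; only the diversity column separates the three near-identical action movies from the varied list.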
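As for the second point, here is a toy simulation, entirely my own construction, of the feedback loop: a greedy recommender that always shows the item with the best observed mean rating ends up concentrating nearly all the ratings it collects on a few items, so the data set the next model trains on is an artifact of the current recommender.

```python
import random

random.seed(0)
N_ITEMS, N_USERS, ROUNDS = 50, 200, 20

# Hidden "true" appeal of each item (unknown to the recommender).
true_appeal = [random.random() for _ in range(N_ITEMS)]

# Observed rating statistics, seeded with one random rating per item.
counts = [1] * N_ITEMS
sums = [random.random() for _ in range(N_ITEMS)]

for _ in range(ROUNDS):
    # Greedy recommender: show everyone the item with the best observed mean.
    best = max(range(N_ITEMS), key=lambda i: sums[i] / counts[i])
    for _ in range(N_USERS):
        # People only rate what they are shown: a noisy draw around true appeal.
        counts[best] += 1
        sums[best] += true_appeal[best] + random.gauss(0, 0.1)

total = sum(counts) - N_ITEMS
print(f"{total} ratings collected; the most-recommended item received "
      f"{max(counts) - 1} of them.")
```

Evaluate any new algorithm on that frozen snapshot and you are scoring it against data shaped by the old policy; a static data set cannot tell you what would happen once the new algorithm starts choosing what people see.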
See also my post "Netflix: an interesting Machine Learning game, but is it good science?"
Note: I organized the ACM KDD Workshop on Large-Scale Recommender Systems and the Netflix Prize Competition along with people like Yehuda Koren. Yehuda is among the candidates to win the Netflix Prize. I do not oppose the Netflix competition; I just do not think that it will solve our big problems.