Congratulations to the winners. I realize that the end product of this competition (a better recommendation system) is a great achievement by itself. Does anybody know what more general benefits (if any) for machine learning, statistics, etc. came out of this competition? Did people come up with better algorithms, models, implementations, or other insights that could be used for other problems?
Anonymous #3: I haven’t seen anything about the final winning method, but my understanding from the previous progress prize report is that the biggest take-away from the Netflix competition is that a try-everything, kitchen-sink approach, combined with intelligent higher-level ensemble methods to combine the simpler models, is very effective at squeezing the most out of the data.
I’m not aware of any super novel approaches that came out of the competition (that were effective, at least). Does anybody know otherwise?
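The "combine simpler models with a higher-level ensemble" idea mentioned above can be illustrated with a minimal sketch. This is not the winning teams' actual method (they blended hundreds of models with far more elaborate techniques); it is just a toy example of linear blending, with synthetic data standing in for real base predictors:

```python
# A minimal sketch of linear blending ("stacking") of two base predictors.
# All data here is synthetic; this only illustrates the general idea.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "true" ratings on a holdout set (Netflix ratings were 1-5).
true = rng.uniform(1, 5, size=1000)

# Two imperfect base predictors (stand-ins for, say, a neighborhood
# model and a matrix-factorization model), each with independent noise.
pred_a = true + rng.normal(0, 0.9, size=true.shape)
pred_b = true + rng.normal(0, 0.9, size=true.shape)

# Fit blending weights (plus an intercept) by least squares on the
# holdout predictions.
X = np.column_stack([pred_a, pred_b, np.ones_like(true)])
weights, *_ = np.linalg.lstsq(X, true, rcond=None)
blend = X @ weights

def rmse(p):
    """Root mean squared error against the true ratings."""
    return float(np.sqrt(np.mean((p - true) ** 2)))

# Because the base models' errors are independent, the blend should
# beat either base model alone on this holdout set.
print(rmse(pred_a), rmse(pred_b), rmse(blend))
```

Even this trivial blend beats its individual components, which hints at why throwing many diverse models into an ensemble worked so well in the competition.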
Kevembuangga says:
The echoing of Twitter spurts as comments appears a bit silly (I see no added value).
@Kevembuangga I have disabled the echoing of Twitter comments. It was always experimental. I admit that it does not seem useful.
michael papish says:
Whether the very precise question posed by the Netflix Prize is directly relevant to improving the user recommendation experience is an interesting question. MediaUnbound is doing a full series on the underlying issues and assumptions of the contest, called Countdown to 10%, here.
Daniel,
Don’t forget that, according to the rules, a one-month challenge now begins 😉
@Nicholas Sure.