
on iPods and Netflix

A couple of recent articles have got me thinking. This article from The New York Times talks about the issues involved in Netflix's recent contest announcement, offering a million-dollar bounty to anyone who can improve the company's movie recommendation system beyond a certain point. And this article in The Guardian talks about the problems inherent in using actual randomness in the iPod's shuffle function: if you make something truly random, people will still project patterns onto the results.

The common thread is that both of these technologies are about having a machine make decisions that mimic and augment the human decision-making process, and this is a hard problem. People and machines (or at least, current technologies) approach decision-making in profoundly different ways. Computers, for the most part, have the luxury of being purely rational, or at least of approaching pure rationality: given a set of data and problem parameters, our algorithms will find or approximate the right answer. Human brains, though, are built to convince us that we have the right answer, when in fact we have the solution to a related but different problem. Ask a human (and here, as usual, I am specifically excluding statisticians) for a series of random numbers between one and ten, and they will rarely give you the same number twice in a row. Ask a random number generator, and doubles are common. Tell a human you like a Tom Waits album and ask for recommendations, and they might point you to PJ Harvey or Buck 65. Ask a collaborative filter, and you'll get every other Tom Waits album, even the ones that sound completely different or aren't very good.
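The doubles claim is easy to check empirically. Here is a minimal sketch (my own illustration, not from either article): draw digits uniformly from 1 to 10 and count how often a draw repeats its immediate predecessor. With ten equally likely values, about one in ten consecutive pairs should match, which is far more often than people produce when asked to "act random".

```python
import random

# Draw 10,000 digits uniformly from 1..10 and count adjacent repeats.
# With 10 equally likely values, roughly 10% of consecutive pairs match.
random.seed(42)
draws = [random.randint(1, 10) for _ in range(10_000)]
doubles = sum(1 for a, b in zip(draws, draws[1:]) if a == b)
rate = doubles / (len(draws) - 1)
print(f"adjacent repeats: {doubles} of {len(draws) - 1} pairs ({rate:.1%})")
```

A human-generated sequence of the same length would typically show a repeat rate well below 10%, because we unconsciously treat "no doubles" as a hallmark of randomness.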

So what’s going on? I can certainly appreciate that computers and humans operate in different ways, and our computational machinery is wildly divergent. My question is why do we humans have this misplaced confidence in our solutions? Why do our random selections exhibit structure we are not aware of? Why do our recommendations contain so many hidden assumptions about sample distances and evaluation metrics? And why is it such a struggle to even become aware of our biases?

I have a few half-baked ideas, based on the way we evolved as a large, sociable, vision-oriented species, but it's not an easy issue. However, it is one that I'm inclined to consider a key challenge for Artificial Intelligence. The last few years of AI have given us excellent information-sorting technologies, like Google's search. I think the next step is to assist us, more directly but unobtrusively, in the decision-making process. Not to take decision-making away from us, but to augment our decision-making by allowing us to offload some of the inaccurate and tedious predictions we make in the process.

One Comment

  1. Ruben wrote:

    If we want a complete artificial intelligence, we need to create a kind of artificial “stupidness”.

    For example, toss a coin 10 times. If you get 10 heads, then for the next toss you know that

    a) if you assume no prior information, the probability of heads is 0.5

    b) if you use that prior information, the coin seems to be biased, so the probability of heads is higher.

    Now, ask a human (excluding statisticians) to bet on the next toss. Surprisingly, everyone says tails!

    Tuesday, October 10, 2006 at 1:46 am | Permalink
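The commenter's point (b) can be made concrete with a small sketch (my addition, not part of the comment): under a uniform Beta(1, 1) prior on the coin's heads probability, a standard Beta-Bernoulli update after observing 10 heads in 10 tosses gives a posterior mean of (1 + 10) / (2 + 10) ≈ 0.92, Laplace's rule of succession. The function name and the choice of prior here are illustrative assumptions, not anything the commenter specified.

```python
# Beta-Bernoulli update for a possibly biased coin.
# Assumes a uniform Beta(1, 1) prior over the heads probability.
def posterior_heads(heads: int, tosses: int,
                    alpha: float = 1.0, beta: float = 1.0) -> float:
    """Posterior mean probability that the next toss is heads."""
    return (alpha + heads) / (alpha + beta + tosses)

# (a) ignoring the evidence: the prior mean is 0.5
print(posterior_heads(0, 0))    # 0.5
# (b) updating on 10 heads out of 10: the coin looks biased toward heads
print(posterior_heads(10, 10))  # 11/12, about 0.917
```

Under either answer, heads is at least as likely as tails, which is exactly why the human instinct to bet tails ("it's due!") is the gambler's fallacy the post is describing.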