Category Archives: artificial intelligence

we all like nips


This week is the Neural Information Processing Systems workshop in Vancouver and Whistler — apparently, skiers and snowboarders are quite well represented in the AI and neuroscience communities. I enjoy the conference and get a lot out of it, but I do tend to find academic conferences have a fair bit of an “angels on the head of a pin” aspect to them, and I don’t really feel very much at home with large swaths of the academic Machine Learning community, who have a passion for stats that I can only admire from a distance. It’s been clear to me for some time now that, as much as I enjoy research, my path is destined to take me out of the ivory tower into industrial research and development.

That’s not to say NIPS isn’t interesting. I snuck away from my work in Yaletown to hear Joshua Tenenbaum speak today. I think he’s doing some fascinating work that I’ve been following for a while now.

Since about 1990, AI has been revolutionised by using probabilities, rather than rules, to model human-intelligence-type tasks. This is what powers Google, and computer vision, and other recent successes in AI. However, it was generally felt that this was just a hack, and that the mind processes experience through an innate structure — the most famous proponent of this view being, of course, Noam Chomsky. What Josh and other people in his area are building are empirical, rather than structural, models of the brain. In a series of clever experiments, they show that, for certain problems at least, the brain really does seem to make guesses in a Bayesian probabilistic way, just like (much of) Machine Learning. Nobody knows why or how — clearly, we don’t have Bayesian solvers embedded in our heads doing Monte Carlo sampling — but the fact that the solutions are often the same opens the possibility that Machine Learning and Cognitive Science may yet be linked in a much more profound way than was thought five or ten years ago.
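To make the flavour of those experiments concrete, here is a minimal Python sketch of the kind of prediction task Tenenbaum and Griffiths study: given that a process (a human life, say) has already lasted t units, predict its total duration. The Gaussian prior and its parameters below are hypothetical stand-ins of my own choosing, not the real-world priors used in the actual experiments:

```python
import numpy as np

# A minimal sketch of an everyday Bayesian prediction task: given
# that a process has already lasted t units, how long will it last
# in total? We put a (hypothetical) prior on the total duration and
# assume t was observed at a uniformly random point within it, so
# p(t | t_total) = 1 / t_total for t <= t_total.

def predict_total(t_observed, prior_mean=75.0, prior_sd=15.0):
    """Posterior median of the total duration, given time elapsed so far."""
    t_total = np.arange(1.0, 200.0)            # grid of hypotheses
    prior = np.exp(-0.5 * ((t_total - prior_mean) / prior_sd) ** 2)
    likelihood = np.where(t_total >= t_observed, 1.0 / t_total, 0.0)
    posterior = prior * likelihood
    posterior /= posterior.sum()
    cdf = np.cumsum(posterior)
    return t_total[np.searchsorted(cdf, 0.5)]  # posterior median

# Meet a 40-year-old: predict a total lifespan a little below the
# prior mean. Meet a 90-year-old: the prediction shifts into the 90s.
print(predict_total(40.0))
print(predict_total(90.0))
```

The striking empirical finding is that people’s median guesses in tasks like this track the posterior median remarkably well, even though nobody is consciously doing the sum.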

Anyway, if you’re a machine learning person and you want more info, you know how to get it. But if you’re not, I highly recommend you check out this article from The Economist about Josh Tenenbaum and Thomas Griffiths’ work. It’s really fascinating stuff. I linked to it before, on the previous incarnation of my blog, but that link is long gone now, and a little reposting never hurt anyone, now did it?

on iPods and Netflix


A couple of recent articles have got me thinking. This article from The New York Times talks about the issues involved in Netflix’s recent contest announcement, offering a million-dollar bounty to anyone who can improve the company’s movie recommendation system beyond a certain point. And this article in The Guardian talks about the problems inherent in using actual randomness in the iPod’s shuffle function — if you make something truly random, people will still project patterns onto the results.

The common thread is that both of these technologies are about having a machine make decisions that mimic and augment the human decision-making process, and this is a hard problem. People and machines (or at least, current technologies) approach decision-making in profoundly different ways. Computers, for the most part, have the luxury of being purely rational, or at least of approaching pure rationality. Given a set of data and problem parameters, our algorithms will find or approximate the right answer. For human beings, though, our brains are designed to trick us into believing we have the right answer, when in fact we have the solution to a related, but different, problem. Ask a human (and here, as usual, I am specifically excluding statisticians) to give you a series of random numbers between one and ten, and they will rarely give you the same number twice in a row. Ask a random number generator, and doubles are common. Tell a human you like a Tom Waits album and ask for recommendations, and they might tell you about PJ Harvey or Buck 65. Ask a collaborative filter, and you’ll get every other Tom Waits album, even the ones that sound completely different or aren’t very good.
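The doubles effect is easy to demonstrate. Here is a quick Python sketch; the “humanised” shuffle at the end is a hypothetical fix of my own devising, in the spirit of making the sequence less random so that it feels more random:

```python
import random

def count_doubles(seq):
    """Count adjacent repeats in a sequence."""
    return sum(a == b for a, b in zip(seq, seq[1:]))

random.seed(0)

# Truly uniform choices repeat back-to-back about one time in ten:
# roughly 100 doubles in a 1000-number sequence.
uniform = [random.randint(1, 10) for _ in range(1000)]
print(count_doubles(uniform))

# A hypothetical "humanised" shuffle: less random, so it feels more
# random. Rejecting immediate repeats is the crudest possible version.
def humanised(n, lo=1, hi=10):
    seq = [random.randint(lo, hi)]
    while len(seq) < n:
        candidate = random.randint(lo, hi)
        if candidate != seq[-1]:
            seq.append(candidate)
    return seq

print(count_doubles(humanised(1000)))  # always 0, by construction
```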

So what’s going on? I can certainly appreciate that computers and humans operate in different ways, and our computational machinery is wildly divergent. My question is why do we humans have this misplaced confidence in our solutions? Why do our random selections exhibit structure we are not aware of? Why do our recommendations contain so many hidden assumptions about sample distances and evaluation metrics? And why is it such a struggle to even become aware of our biases?

I have a few half-baked ideas, based on the way we evolved as a large, sociable, vision-oriented species, but it’s not an easy issue. However, it is one that I’m inclined to consider a key challenge for Artificial Intelligence. The last few years of AI have given us excellent information-sorting technologies, like Google’s search. I think the next step is to assist us, more directly but unobtrusively, in the decision-making process. Not to take decision-making away from us, but to augment our decision-making by allowing us to offload some of the inaccurate and tedious predictions we make in the process.

MLSS ’06


So, the 2006 Canberra Machine Learning Summer School is winding down. This is the second-last day. The past couple of weeks have been pretty intensive — four two-hour sessions most days, plus my volunteer duties. But I’ve learned a lot. Hearing Bernhard Schölkopf and Alex Smola talk about kernel machines made me bump their book way up on my to-read list, and Olivier Bousquet’s short course on Learning Theory has filled in a few of the many, many gaps in my knowledge.

Probably the most inspirational talk for me, though, was Satinder Singh’s talk about Reinforcement Learning (one of several RL speakers). Not only is he an excellent lecturer, he is very interested in figuring out how to bring Machine Learning (and RL in particular) back from being a kind of “reckless Statistics” to AI, which is something I’ve recently been giving a lot of thought to. So instead of using the framework of Operations Research and Control Theory, he suggests rethinking the ideas we have of states, actions and rewards in an AI framework — using variable times for actions, for example, or thinking of reward as something internal to an agent rather than its environment. So, for example, instead of a robot getting a reward for reaching a goal position, it could get rewards for exploring new parts of its environment, or for taking actions that produce unexpected results — being rewarded for acquiring domain knowledge, in other words, even if it’s not directly related to any specific task. And then, once the result of an action is learned well enough that it is no longer surprising, it ceases to be rewarding, but the agent keeps that bit of domain knowledge.
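Here is a toy Python sketch of that internal-reward idea as I understood it (my own illustration, not Singh’s formulation; the five-state chain world is invented). The agent’s only reward is surprise: how badly its learned transition model predicted what just happened. In a deterministic world each transition pays out once, after which the reward fades away but the model entry remains. A real agent would feed this reward into a proper RL update such as Q-learning:

```python
import random
from collections import defaultdict

# Toy chain world: five states in a row, action 0 steps left, action 1
# steps right. There is no external reward; the agent is paid in surprise.
N_STATES, N_ACTIONS = 5, 2

def step(state, action):
    return max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))

# counts[(s, a)][s_next] = how often taking a in s has led to s_next;
# this table *is* the agent's learned model of its environment.
counts = defaultdict(lambda: defaultdict(int))

def intrinsic_reward(s, a, s_next):
    """Surprise = 1 minus the model's estimated probability of the outcome."""
    total = sum(counts[(s, a)].values())
    if total == 0:
        return 1.0   # never tried this action here: maximally surprising
    return 1.0 - counts[(s, a)][s_next] / total

random.seed(0)
state = 0
for t in range(30):
    action = random.randrange(N_ACTIONS)
    nxt = step(state, action)
    reward = intrinsic_reward(state, action, nxt)
    counts[(state, action)][nxt] += 1   # the knowledge persists...
    print(f"t={t:2d}  s={state} a={action} -> {nxt}  surprise={reward:.2f}")
    state = nxt                         # ...but the reward fades to zero
```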

I was particularly interested to see Michael Littman’s talk on RL, since I know he has some different views, but unfortunately, he was caught in the blizzard of aught-six and didn’t make it.

The most useful part of the school, however, has been talking to the other students, most of whom are around the same point in their academic careers as I am, or a year or two further along. Not only do we get to exchange great gossip about our respective supervisors, but it is often a lot easier to learn by discussing things with other students than by being lectured at, and it’s cool to find out about research that’s happening “on the ground”, so to speak, in the labs and grad offices the world over.
