
Category Archives: machine learning

we all like nips


This week is the Neural Information Processing Systems conference in Vancouver and Whistler — apparently, skiers and snowboarders are quite well represented in the AI and neuroscience communities. I enjoy the conference and get a lot out of it, but I do tend to find academic conferences have a fair bit of an “angels on the head of a pin” aspect to them, and I don’t really feel very much at home with large swaths of the academic Machine Learning community, who have a passion for stats that I can only admire from a distance. It’s been clear to me for some time now that, as much as I enjoy research, my path is destined to take me out of the ivory tower and into industrial research and development.

That’s not to say NIPS isn’t interesting. I snuck away from my work in Yaletown to hear Joshua Tenenbaum speak today. I think he’s doing some fascinating work that I’ve been following for a while now.

Since about 1990, AI has been revolutionised by using probabilities, rather than rules, to model human-intelligence-type tasks. This is what powers Google, and computer vision, and other recent successes in AI. However, it was generally felt that this was just a hack, and that the mind processes experience through an innate structure — the most famous proponent of this view being, of course, Noam Chomsky. What Josh and other people in his area are working on are empirical, rather than structural, models of the brain. In a series of clever experiments, they show that, for certain problems at least, the brain really does seem to make guesses in a Bayesian probabilistic way, just like (much of) Machine Learning. Nobody knows why or how — clearly, we don’t have Bayesian solvers embedded in our heads doing Monte Carlo sampling — but the fact that the solutions are often the same opens the possibility that Machine Learning and Cognitive Science may yet be linked in a much more profound way than was thought five or ten years ago.
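To make “guessing in a Bayesian way” concrete for non-ML readers, here is a toy sketch — my own illustration, not anything from the talk — in the spirit of Tenenbaum’s “number game”: given a few example numbers, a learner weighs candidate concepts by how well they explain the data, and more specific concepts win out as evidence accumulates. The hypotheses and the posterior function below are made up purely for this example.

```python
# Toy Bayesian concept learning with uniform priors and the "size principle":
# a consistent hypothesis assigns probability 1/|h| to each observed example,
# so smaller (more specific) concepts get favoured as more examples arrive.

def posterior(data, hypotheses):
    """Return the normalised posterior P(h | data) over the given hypotheses."""
    scores = {}
    for name, extension in hypotheses.items():
        if all(x in extension for x in data):
            scores[name] = (1.0 / len(extension)) ** len(data)
        else:
            scores[name] = 0.0  # inconsistent with the data
    total = sum(scores.values())
    return {name: s / total for name, s in scores.items()}

hypotheses = {
    "even numbers": set(range(2, 101, 2)),
    "multiples of 10": set(range(10, 101, 10)),
    "powers of 2": {2, 4, 8, 16, 32, 64},
}

print(posterior([16], hypotheses))            # "powers of 2" favoured, "even numbers" still in play
print(posterior([16, 8, 2, 64], hypotheses))  # "powers of 2" now all but certain
```

After one example the learner hedges between the consistent concepts; after a handful, the most specific consistent concept dominates — roughly the kind of sharpening these experiments probe in people.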

Anyway, if you’re a machine learning person and you want more info, you know how to get it. But if you’re not, I highly recommend you check out this article from The Economist about Josh Tenenbaum and Thomas Griffiths’ work. It’s really fascinating stuff. I linked to it before, on the previous incarnation of my blog, but that link is long gone now, and a little reposting never hurt anyone, now did it?

MLSS ’06


So, the 2006 Canberra Machine Learning Summer School is winding down. This is the second-last day. The past couple of weeks have been pretty intensive — four two-hour sessions most days, plus my volunteer duties. But I’ve learned a lot. Hearing Bernhard Schölkopf and Alex Smola talk about kernel machines made me bump their book way up on my to-read list, and Olivier Bousquet’s short course on Learning Theory has filled in a few of the many, many gaps in my knowledge.

Probably the most inspirational talk for me, though, was Satinder Singh’s talk about Reinforcement Learning (one of several RL speakers). Not only is he an excellent lecturer, he is very interested in figuring out how to bring Machine Learning (and RL in particular) back from being a kind of “reckless Statistics” to AI, which is something I’ve recently been giving a lot of thought to. So instead of using the framework of Operations Research and Control Theory, he suggests rethinking the ideas we have of states, actions and rewards in an AI framework — using variable times for actions, for example, or thinking of reward as something internal to an agent rather than its environment. So, for example, instead of a robot getting a reward for reaching a goal position, it could get rewards for exploring new parts of its environment, or for taking actions that produce unexpected results — being rewarded for acquiring domain knowledge, in other words, even if it’s not directly related to any specific task. And then, once the result of an action is learned well enough that it is no longer surprising, it ceases to be interesting, but the agent keeps that bit of domain knowledge.
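For the curious, here is a minimal sketch of what such an internal reward might look like in code. This is my own illustration, not Singh’s actual formulation: it assumes a plain tabular Q-learning agent with a count-based novelty bonus, and the class name and bonus_scale parameter are invented for the example.

```python
import random
from collections import defaultdict

# Sketch of an "internal" reward: a count-based novelty bonus that is large the
# first few times a state-action pair is tried and fades as it becomes familiar.

class CuriousQLearner:
    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1, bonus_scale=1.0):
        self.q = defaultdict(float)      # Q-values indexed by (state, action)
        self.visits = defaultdict(int)   # visit counts driving the novelty bonus
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.bonus_scale = bonus_scale

    def act(self, state):
        # Epsilon-greedy action selection.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, external_reward, next_state):
        # Internal reward shrinks as (state, action) stops being surprising.
        self.visits[(state, action)] += 1
        bonus = self.bonus_scale / self.visits[(state, action)]
        reward = external_reward + bonus
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])

# Example: an agent wandering a tiny 1-D world purely on curiosity,
# with no external reward at all.
agent = CuriousQLearner(actions=["left", "right"])
state = 0
for _ in range(100):
    action = agent.act(state)
    next_state = max(0, min(4, state + (1 if action == "right" else -1)))
    agent.update(state, action, external_reward=0.0, next_state=next_state)
    state = next_state
```

The bonus is largest the first time a state-action pair is tried and decays as its outcome becomes familiar, so exploration is rewarded for its own sake and then naturally tapers off — the “no longer surprising, no longer interesting” behaviour described above.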

I was particularly interested to see Michael Littman’s talk on RL, since I know he has some different views, but unfortunately, he was caught in the blizzard of aught-six and didn’t make it.

The most useful part of the school, however, has been talking to the other students, most of whom are around the same point in their academic careers as I am, or a year or two further along. Not only do we get to exchange great gossip about our respective supervisors, but it is often a lot easier to learn by discussing things with other students than by being lectured at, and it’s cool to find out about research that’s happening “on the ground”, so to speak, in the labs and grad offices the world over.
