So, the 2006 Canberra Machine Learning Summer School is winding down. This is the second-last day. The past couple of weeks have been pretty intensive — four two-hour sessions most days, plus my volunteer duties. But I’ve learned a lot. Hearing Bernhard Schölkopf and Alex Smola talk about kernel machines made me bump their book way up on my to-read list, and Olivier Bousquet’s short course on Learning Theory has filled in a few of the many, many gaps in my knowledge.
Probably the most inspirational talk for me, though, was Satinder Singh’s talk on Reinforcement Learning (one of several RL speakers). Not only is he an excellent lecturer, he is very interested in figuring out how to bring Machine Learning (and RL in particular) back from being a kind of “reckless Statistics” to AI, which is something I’ve recently been giving a lot of thought to. So instead of using the framework of Operations Research and Control Theory, he suggests rethinking our ideas of states, actions and rewards in an AI framework — using variable times for actions, for example, or thinking of reward as something internal to an agent rather than its environment. So, for example, instead of a robot getting a reward for reaching a goal position, it could get rewards for exploring new parts of its environment, or for taking actions that produce unexpected results — being rewarded for acquiring domain knowledge, in other words, even if it’s not directly related to any specific task. And then, once the result of an action is learned well enough that it is no longer surprising, it ceases to be interesting, but the agent keeps that bit of domain knowledge.
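To make that idea concrete for myself, here’s a tiny toy sketch (my own illustration, not Singh’s actual formulation, and all the names are made up) of a “surprise” reward: the agent keeps a crude predictive model of what its actions do, and the reward for an action is how badly the model predicted the outcome. Once a transition is familiar, the surprise, and hence the reward, fades toward zero.

```python
# Toy sketch of an intrinsic "surprise" reward (hypothetical example, not
# Singh's formulation): reward = how unexpected the observed outcome was
# under a simple count-based model of state transitions.

from collections import defaultdict


class SurpriseReward:
    def __init__(self):
        # counts[(state, action)][next_state] = times this outcome was observed
        self.counts = defaultdict(lambda: defaultdict(int))

    def reward(self, state, action, next_state):
        outcomes = self.counts[(state, action)]
        total = sum(outcomes.values())
        # Probability the model assigned to the outcome we actually saw
        # (zero if we've never seen it before).
        predicted = outcomes[next_state] / total if total else 0.0
        # Update the model with the new observation.
        outcomes[next_state] += 1
        # Surprise = 1 - predicted probability: large for novel outcomes,
        # shrinking toward zero as the transition becomes familiar.
        return 1.0 - predicted


# The first time a transition is seen it is maximally rewarding; after many
# repetitions the reward (and the agent's interest) vanishes, but the learned
# counts — the domain knowledge — stick around.
bonus = SurpriseReward()
print(bonus.reward("room_a", "go_east", "room_b"))  # 1.0 -- novel
for _ in range(20):
    bonus.reward("room_a", "go_east", "room_b")
print(bonus.reward("room_a", "go_east", "room_b"))  # near 0 -- learned
```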
I was particularly interested to see Michael Littman’s talk on RL, since I know he has some different views, but unfortunately, he was caught in the blizzard of aught-six and didn’t make it.
The most useful part of the school, however, has been talking to the other students, most of whom are at around the same point in their academic careers as I am, or a year or two further along. Not only do we get to exchange great gossip about our respective supervisors, but it’s often a lot easier to learn by discussing things with other students than by being lectured at, and it’s cool to find out about the research happening “on the ground”, so to speak, in labs and grad offices the world over.