Artificial General Intelligence

For a period in 2008 I was working on the OpenCog project. I left the project due to the lack of engineering competency and leadership displayed by the group (not to mention a few personality clashes), but I still think the work of Dr Ben Goertzel that we were trying to implement is fundamentally sound. I do, however, wonder how many people actually get what it is all about. There are so many buzzwords, and so much focus on statistics and logic and all those messy details, that I think the underlying simplicity of the mechanism isn't being communicated. So here's my attempt.

The primary distinction between AGI and more "specialized" AI is the idea that the system should be able to solve problems which are not necessarily formally specified. And this is so important that I'm going to give an example.

You are a farmer taking your dog, a chicken and a sack of grain to market when you come across a river. The only way across the river is by a small boat, which can hold at most you and one of the three items. Left unsupervised, the chicken will eat the grain or the dog will eat the chicken (however, the dog won't try to eat the grain, nor will the dog or the chicken wander off). What's the quickest way to get everything across the river?

It's a standard puzzle which I'm sure you've heard before. There are existing AI systems that can solve problems like this, but they need the problem to be restated in a formal way, such as:


[[F S], [D S], [C S], [G S]] = START
[[F a], [D _], [C !a], [G !a]] = FAIL
[[F a], [D !a], [C !a], [G _]] = FAIL
[[F E], [D E], [C E], [G E]] = GOAL
!S = E
!E = S
move([[F a], [D _], [C _], [G _]], F) = [[F !a], [D _], [C _], [G _]]
move([[F a], [D a], [C _], [G _]], D) = [[F !a], [D !a], [C _], [G _]]
move([[F a], [D _], [C a], [G _]], C) = [[F !a], [D _], [C !a], [G _]]
move([[F a], [D _], [C _], [G a]], G) = [[F !a], [D _], [C _], [G !a]]


A formal problem like this can be driven from the START state to the GOAL state without hitting a FAIL state. This is one of those problems that can be easily formalized. There are many problems which cannot be - consider anything where the world changes dynamically, or where there is uncertainty about the current state of the world.
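
To make that concrete, here's a minimal sketch in Python of driving the formal specification above from START to GOAL - the state encoding and the breadth-first search are my own, not something out of OpenCog or Soar:

from collections import deque

ITEMS = ("F", "D", "C", "G")   # Farmer, Dog, Chicken, Grain
START = ("S", "S", "S", "S")
GOAL = ("E", "E", "E", "E")

def fails(state):
    # FAIL states: chicken with grain, or dog with chicken, without the farmer.
    f, d, c, g = state
    return (c == g != f) or (d == c != f)

def moves(state):
    # The farmer crosses alone (item "F") or with one item on his side.
    f = state[0]
    far = "E" if f == "S" else "S"   # !S = E, !E = S
    for i, item in enumerate(ITEMS):
        if state[i] == f:
            nxt = list(state)
            nxt[0], nxt[i] = far, far
            yield item, tuple(nxt)

def solve():
    # Breadth-first search: shortest sequence of crossings avoiding FAIL states.
    queue, seen = deque([(START, [])]), {START}
    while queue:
        state, path = queue.popleft()
        if state == GOAL:
            return path
        for item, nxt in moves(state):
            if nxt not in seen and not fails(nxt):
                seen.add(nxt)
                queue.append((nxt, path + [item]))

print(solve())   # ['C', 'F', 'D', 'C', 'G', 'F', 'C'] - seven crossings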

Solving this puzzle, and more complicated problems, requires sophisticated analysis like that found in the Soar architecture: goal-directed reasoning. Essentially, to make progress towards a goal you need to consider what options are available and which of them are most likely to lead you to the goal. Achieving a goal may require the creation of sub-goals, which can be achieved sequentially or in parallel.

In a formal system, the options are explicitly specified and may even have priorities associated with them that tell the system which are more likely to result in achieving the goal - or at least stop it from going around in circles trying things that don't work. For an AGI, the evaluation of the available options has to be done by experimentation or by statistical observation of previous attempts. The options may even be unspecified altogether, and have to be built up from more primitive options until they are at the appropriate level of detail (and abstraction) to accurately predict how much they will improve the likelihood of achieving a goal.
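
To show what "statistical observation of previous attempts" might look like in practice, here's a small sketch - the class name, the optimistic prior and the exploration rate are my own inventions, not part of any OpenCog design - of an option selector that mostly picks the option with the best observed success rate but sometimes experiments:

import random

class OptionStats:
    # Tracks, per option, how often trying it improved goal achievement.
    def __init__(self, options, explore=0.1):
        self.options = list(options)
        self.explore = explore   # chance of experimenting at random
        self.tries = {o: 0 for o in self.options}
        self.wins = {o: 0 for o in self.options}

    def success_rate(self, option):
        if self.tries[option] == 0:
            return 1.0   # optimistic prior: untried options look promising
        return self.wins[option] / self.tries[option]

    def pick(self):
        if random.random() < self.explore:
            return random.choice(self.options)           # experimentation
        return max(self.options, key=self.success_rate)  # observed statistics

    def record(self, option, improved_goal):
        self.tries[option] += 1
        if improved_goal:
            self.wins[option] += 1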

And so we start to see how we might build an AGI. We need something labeled as the goal and a measurement of how "achieved" it is. We need sub-goals which offer guarantees about how much they will improve the achievement of their super-goals when they are achieved. We need some strategies for allocating processor and memory resources to goals. We need a system where goals can reallocate those resources to "tasks" which will enact options that have been predicted to improve the achievement of the goal.
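
Here's one way those pieces might fit together - a sketch under my own assumptions (the names, the 0-to-1 achievement measure and the proportional split are illustrative, not the OpenCog design) of goals that promise a gain to their super-goal and a budget allocator that favours the bigger promises:

from dataclasses import dataclass, field

@dataclass
class Goal:
    name: str
    achievement: float = 0.0     # how "achieved" the goal is, from 0 to 1
    promised_gain: float = 0.0   # improvement promised to the super-goal
    subgoals: list = field(default_factory=list)

    def allocate(self, budget):
        # Split a processor/memory budget among unachieved sub-goals in
        # proportion to the improvement each one promises.
        open_goals = [g for g in self.subgoals if g.achievement < 1.0]
        total = sum(g.promised_gain for g in open_goals) or 1.0
        return {g.name: budget * g.promised_gain / total for g in open_goals}

market = Goal("get everything to market", subgoals=[
    Goal("cross the river", promised_gain=0.6),
    Goal("walk to town", promised_gain=0.3),
])
print(market.allocate(budget=100))   # {'cross the river': 66.66..., 'walk to town': 33.33...}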

I've been saying things like "predict" and "statistical observation" as if they're no big thing, but these are probably the hardest parts of any intelligent system. There are lots of different ways to do them, some better than others, and making an architecture that can support many of them, and choose between them for different problems, is a grand challenge.

Comments

  1. A year ago I started writing Artificial Intelligence 101, and wrote two parts of a six-part series. I really should finish that series. After that I'll start writing AI102, where I present FUNGAL, a general-purpose AI language for microcontrollers.

    http://robot_guy.blogspot.com/2009/03/artificial-intelligence-101.html

    http://robot_guy.blogspot.com/2009/03/artificial-intelligence-101-part-2.html

  2. Sorry to hear OpenCog isn't getting anywhere. I heard about it a few years ago and hoped it could make some progress. Of course, a lot of AGI work never really seems to get anywhere.

    Embarrassing - we build computers with vastly more computational capacity, data capacity and scale than a human brain - but the AI minds aren't getting there.

  3. @Kellyst - it's not completely true we're not getting anywhere, but progress is slow due to having few resources to work with. That will hopefully change in a few months though.

    Ben is a strong leader in terms of theory of mind, but there was a diverse group of people working on the software without a clear engineering lead. We each have our own ideas about software engineering principles and the way forward (and we were each mostly working on separate components). I hope to step up to this role in the coming months, however.

    Thanks QuantumG for the explanation of AGI. Nicely done :-)

  4. Joel, finally? You were saying that when I was working on the project ;)

