The primary distinction between AGI and more "specialized" AI is that the system should be able to solve problems which are not necessarily formally specified. This distinction is important enough to deserve an example.
You are a farmer taking your dog, a chicken and a sack of grain to market, and you come across a river. The only way across the river is a small boat, which can hold at most you and one of the three items. Left unsupervised, the chicken will eat the grain or the dog will eat the chicken (the dog won't try to eat the grain, though, and neither the dog nor the chicken will wander off). What's the quickest way to get everything across the river?
It's a standard puzzle which I'm sure you've heard before. There are existing AI systems that can solve problems like this, but they need the problem to be restated formally, for example:
[[F S], [D S], [C S], [G S]] = START
[[F a], [D _], [C !a], [G !a]] = FAIL
[[F a], [D !a], [C !a], [G _]] = FAIL
[[F E], [D E], [C E], [G E]] = GOAL
!S = E
!E = S
move([[F a], [D _], [C _], [G _]], nothing) = [[F !a], [D _], [C _], [G _]]
move([[F a], [D a], [C _], [G _]], D) = [[F !a], [D !a], [C _], [G _]]
move([[F a], [D _], [C a], [G _]], C) = [[F !a], [D _], [C !a], [G _]]
move([[F a], [D _], [C _], [G a]], G) = [[F !a], [D _], [C _], [G !a]]
A formal problem like this can be driven from the START state to the GOAL state without hitting a FAIL state. This is one of those problems that can be easily formalized. There are many problems which cannot be - consider anything where the world changes dynamically, or where there is uncertainty about the current state of the world.
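To make "driven from START to GOAL without hitting a FAIL state" concrete, here is a minimal sketch of such a solver - using my own tuple encoding rather than the bracket notation above - that does a breadth-first search over states and prunes any FAIL state:

```python
# Breadth-first search over puzzle states, pruning any FAIL state.
from collections import deque

# A state is the bank ('S' or 'E') of (Farmer, Dog, Chicken, Grain).
START = ('S', 'S', 'S', 'S')
GOAL = ('E', 'E', 'E', 'E')

def fails(state):
    f, d, c, g = state
    # Chicken eats grain, or dog eats chicken, whenever the farmer is elsewhere.
    return (c == g != f) or (d == c != f)

def moves(state):
    f, d, c, g = state
    other = 'E' if f == 'S' else 'S'
    yield (other, d, c, g)              # farmer crosses alone
    if d == f:
        yield (other, other, c, g)      # farmer takes the dog
    if c == f:
        yield (other, d, other, g)      # farmer takes the chicken
    if g == f:
        yield (other, d, c, other)      # farmer takes the grain

def solve():
    queue, seen = deque([[START]]), {START}
    while queue:
        path = queue.popleft()
        if path[-1] == GOAL:
            return path
        for nxt in moves(path[-1]):
            if nxt not in seen and not fails(nxt):
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(len(solve()) - 1)  # → 7 crossings in the shortest safe plan
```

Running it recovers the familiar seven-crossing plan: chicken over, back alone, dog over, chicken back, grain over, back alone, chicken over.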
Solving this puzzle - and more complicated problems - requires sophisticated analysis, like the goal-directed reasoning found in the Soar architecture. Essentially, to make progress towards a goal you need to consider what options are available and which of them are most likely to lead you to the goal. Achieving a goal may require the creation of sub-goals, which can be achieved sequentially or in parallel.
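The sub-goal idea can be sketched in a few lines. This is a toy (all the goal names are my own, loosely following the puzzle's solution): a goal is either primitive or decomposes into sub-goals achieved in sequence.

```python
# A goal either appears in SUBGOALS (decomposable) or is primitive.
SUBGOALS = {
    'everything_across': ['chicken_across', 'dog_across',
                          'grain_across', 'chicken_across_again'],
    'dog_across': ['row_dog_over', 'row_chicken_back'],
}

def plan(goal):
    """Expand a goal into the flat sequence of primitive actions achieving it."""
    if goal not in SUBGOALS:
        return [goal]                  # primitive: no further decomposition
    steps = []
    for sub in SUBGOALS[goal]:         # sub-goals pursued sequentially
        steps += plan(sub)
    return steps

print(plan('everything_across'))
# → ['chicken_across', 'row_dog_over', 'row_chicken_back',
#    'grain_across', 'chicken_across_again']
```

A real architecture would interleave this expansion with execution and replan when the world changes; the fixed dictionary here stands in for that machinery.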
In a formal system, the options are explicitly specified and may even have priorities attached that tell the system which are more likely to achieve the goal - or at least stop it from going around in circles trying things that don't work. For an AGI, the evaluation of the available options has to be done by experimentation or by statistical observation of previous attempts. The options may even be unspecified altogether and have to be built up from more primitive options until they are at the appropriate level of detail (and abstraction) to accurately predict how much they will improve the likelihood of achieving a goal.
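One simple way to do "statistical observation of previous attempts" is to keep a running success rate per option and usually pick the best one, occasionally exploring. A sketch (the class and its names are hypothetical, and this is just one of many possible schemes):

```python
import random

class OptionStats:
    """Pick options by observed success rate, with occasional exploration."""

    def __init__(self, options, explore=0.1):
        self.counts = {o: [0, 0] for o in options}  # option -> [successes, tries]
        self.explore = explore

    def rate(self, option):
        s, n = self.counts[option]
        return s / n if n else 0.5                  # optimistic prior for untried options

    def choose(self):
        if random.random() < self.explore:          # occasionally try something else
            return random.choice(list(self.counts))
        return max(self.counts, key=self.rate)      # otherwise exploit the best so far

    def record(self, option, succeeded):
        self.counts[option][0] += int(succeeded)
        self.counts[option][1] += 1
```

With `explore=0.0` the system would settle on whatever has worked before and never revisit; the exploration term is what keeps it discovering options that only look bad because they were unlucky early on.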
And so we start to see how we might build an AGI. We need something labeled as the goal and a measurement of how "achieved" it is. We need sub-goals which offer guarantees about how much they will improve the achievement of their super-goals when they are achieved. We need some strategies for allocating processor and memory resources to goals. We need a system where goals can reallocate those resources to "tasks" which will enact options that have been predicted to improve the achievement of the goal.
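The pieces named above can be put into one speculative sketch - this is a hypothetical design of my own, not a real system: a goal with a measurable degree of achievement, sub-goals that promise some improvement to their super-goal, and an allocator that gives more of a processing budget to the more promising sub-goals.

```python
class Goal:
    """A goal with a measured achievement level and sub-goals promising gains."""

    def __init__(self, name, predicted_gain=0.0):
        self.name = name
        self.achievement = 0.0                # how "achieved" this goal is, in [0, 1]
        self.predicted_gain = predicted_gain  # promised improvement to the super-goal
        self.subgoals = []

def allocate(goal, budget):
    """Split a processing budget across sub-goals in proportion to predicted gain."""
    total = sum(s.predicted_gain for s in goal.subgoals) or 1.0
    return {s.name: budget * s.predicted_gain / total for s in goal.subgoals}

root = Goal('everything_across')
root.subgoals = [Goal('chicken_across', 0.5),
                 Goal('dog_across', 0.3),
                 Goal('grain_across', 0.2)]
print(allocate(root, 100.0))  # → {'chicken_across': 50.0, 'dog_across': 30.0, 'grain_across': 20.0}
```

In a full system the predicted gains would themselves come from the statistical machinery above, and the allocation would be revisited as achievement measurements come in.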
I've been saying things like "predict" and "statistical observation" as if they're no big deal, but these are probably the hardest parts of any intelligent system. There are many different ways to do them, some better than others, and making an architecture that can support many of them - and choose between them for different problems - is a grand challenge.