
Intention is all you need

There are many challenges on the path towards creating a generally intelligent society of mind (GISOM), which is the only viable AGI destination I can see.

Broadly speaking, these challenges can be categorized into three buckets:

1. Action

The first bucket consists of the challenges related to creating a single narrowly-intelligent agent that is sufficiently stable and self-coherent to be a reliable building block for GISOM. For example: building effective and energy-efficient neural nets; finding, preparing and cleaning training data; avoiding overfitting; building in reality-enforced self-correction; and so on.

We seem to be getting close to overcoming these kinds of challenges for a wide range of agents. The advances are coming in multiple forms: the current generation of RLHFed LLMs, real-world robots (like Tesla cars) and knowledge-first projects like Wolfram Language. Eventually, I believe, these advances will collide and we will have sufficiently reliable and powerful "narrow-purpose" agents, suitable for acting as real GISOM building blocks. It may take a few years still, but we are likely to get there eventually.

In the long run, as Claude Shannon showed back in the 1950s, you can build arbitrarily reliable systems from not very reliable components. So we don't have to make individual agents 100% reliable in order to have the whole system (society) of them act reliably.
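To make that concrete, here is a minimal sketch (mine, not from Shannon's paper) of the simplest version of the idea: replicate an unreliable component and take a majority vote. As long as each copy fails independently with probability below one half, the vote is more reliable than any single copy, and adding copies drives the failure rate towards zero.

    from math import comb

    def majority_reliability(p_fail: float, n: int) -> float:
        """Probability that a majority vote over n independent components,
        each failing with probability p_fail, produces the correct output."""
        needed = n // 2 + 1  # votes required for a correct majority
        return sum(comb(n, k) * (1 - p_fail) ** k * p_fail ** (n - k)
                   for k in range(needed, n + 1))

    print(majority_reliability(0.10, 1))   # 0.9        -- one flaky component
    print(majority_reliability(0.10, 5))   # ~0.9914    -- five copies, voted
    print(majority_reliability(0.10, 25))  # ~0.9999998 -- and so on, towards 1.0

Replacing the voter with cross-checking agents is of course much harder than voting relays, but the statistical logic is the same.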

2. Coordination

The second bucket consists of the challenges of coordinating individual agents so that they can work effectively and efficiently together over a prolonged period of time. For example: developing protocols for communication, shared memory structures, eliminating noise amplification in the communication between agents, and dealing with parallelism, conflict and attention issues.

It's very early days when it comes to coordination challenges. Currently we are mostly using existing message protocols and natural language to try to solve them, but this proves tricky in real-world scenarios. Even two or three LLMs chained together seem to mostly spiral out of control or end up in a loop, leading to outcomes that are neither interesting nor useful. My experience in chaining and connecting LLMs and other current models with complex prompts has not been very fruitful so far. Often the practical results coming out of a combination of agents are qualitatively worse than those of a well-configured single-cell agent (see the sketch below). So significant innovation will be required before we can effectively coordinate large systems of generalized agents in a way that is scalable and useful. But the road leading towards success is starting to get clearer - and we have a lot of biological systems to draw inspiration from. So in the long run I'm optimistic. Just like evolution found a way to advance from single-cell organisms to multi-cell ones, I think, with lots of trial and error, we may get there as well.
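To see why, here is a minimal sketch - a toy of my own, not any particular framework's API - of naive two-agent chaining. generate() is a hypothetical stand-in for a model call, and the only safeguards are crude ones: a hard turn cap and an exact-repetition check. Real models fail less trivially, but the feedback structure that makes chains drift is the same.

    # Toy two-agent chain. generate() is a hypothetical stub: it just
    # restates the last line, but the feedback shape - each agent's output
    # becoming the other's input, with nothing pulling the exchange back
    # to the task - is the same one that makes real chains drift.

    def generate(agent: str, prompt: str) -> str:
        last = prompt.splitlines()[-1]
        return f"{agent}: building on '{last}', let's refine it further."

    def chain(task: str, max_turns: int = 10) -> list[str]:
        transcript = [task]
        for turn in range(max_turns):        # crude guard: hard turn cap
            agent = "agent_a" if turn % 2 == 0 else "agent_b"
            reply = generate(agent, "\n".join(transcript))
            if reply in transcript:          # crude guard: exact-repeat check
                break
            transcript.append(reply)
        return transcript

    # Each turn wraps the previous one; the transcript balloons while the
    # task itself never advances:
    print("\n".join(chain("Draft a plan to clean the data.", max_turns=4)))

Note that both guards only detect failure after the fact; neither gives the agents a shared goal or memory, which is exactly the gap this bucket is about.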

3. Control

The third bucket of challenges that we must overcome on our way towards GISOM is all about goal-setting and control. If individual agents function well and coordination challenges are solved, then who decides what the whole GISOM is going to do? There are two options here:

  • first, humans may choose to stay in charge of setting the goals, but the agents will coordinate (e.g. via end-to-end learning) to achieve these goals in their own ways

  • second, the goals and intentions may simply emerge (as Minsky suggests happens in humans) from a combination of the physical needs and structures of the GISOM as well as its training, upbringing and experience (instrumentally this could happen in different ways: the overall goals of the whole GISOM might be completely separate from the goals of individual agents, or, in a different scenario, one agent could dominate the rest and make the others "work for him". Still, whichever way this goes, the center of agency stays within the GISOM).

The third bucket is where everything comes to a head. Because ultimately, the challenges from buckets 1 and 2 are engineering challenges. But the third bucket is all about values, ethics, choices. Intention is all you need here. But intention is also the one thing that we can't solve with engineering. Any loss function will need to be optimized for something. Choosing that "something" is the heart of the matter. And if the choice of that "something" (at any level of abstraction) is not allowed to emerge from the inner dynamics of our GISOM: can we call it truly intelligent?

We seem to have a paradox on our hands: we can either allow the independent intentionality of GISOM to emerge from the dynamics between the agents themselves (with all the risks and unpredictable consequences), or we can try to keep controlling them instrumentally - but then they will never be fully generally intelligent (in the same way that you would not consider a person generally intelligent if they kept replying to all questions and obeying all requests without ever questioning why they are being made, refusing to answer, getting angry, or in other ways exposing a will of their own).

The actual path forward may not be based on a decision, but on economic factors (as it is with most big things in history). Narrow agents may get deployed, and they would coordinate and share information by means of economic pressures and transactions as well as API calls and natural language protocols. Once their coordination gets to a certain level of maturity, intelligence may emerge - on a higher level, the level of the whole society. And we may never even notice that such an intelligence exists. Because its intentions will mean as much to us as our human goals mean to our gut bacteria. Note that this doesn't stop the gut bacteria from controlling significant portions of our behavior as humans (they can make me run to the washroom right now).

I once thought that AIs could be our bacteria, assisting our information metabolism. But now the reverse seems equally likely: we may become the gut bacteria of the superintelligent GISOM. As essential for its survival as our gut bacteria are for ours. So hopefully it'll keep us around and feed us well (as long as we don't misbehave and make it take some antibiotics).

Whatever we do, let's try not to cause the AI's appendicitis. Because we know how that ends.

[Amstelveen, 020231118]

