
5 Questions from under the rug

There are lots of skeletons in the cupboard of civilisation. And plenty of big hairy questions that we sweep under the rug and try not to think about, because thinking about them hurts.

Don't get me wrong, I don't believe there is anything categorically immoral about keeping difficult truths locked up in distant cupboards and difficult questions hidden under the rug of consciousness. In fact I would say it is essential to the basic functioning of an individual (and a society) to have a mechanism for NOT thinking about these things all the time. Starving due to paralysis of analysis is a rather silly way to die.

But, there are two very different ways not to think about something important:

  1. We can pretend that we know the answer and therefore we don't have to think about it.

  2. We can accept that we don't have a clue, and keep functioning, keep moving, keep balancing without the solid foundation (while still occasionally pondering these questions at our leisure).

The first way of not thinking about a difficult question gives you a foundation that you can pretend is solid (until it crumbles one day). The second way requires you to find peace and balance in not knowing, while retaining the ability to function without a foundation. The first way helps you build sand castles. The second way teaches you to float in the waves. Neither way is perfect; both are useful at times.

But as a modern civilisation we seem very uncomfortable with the second way, for reasons that are not entirely clear to me. Still, the fact remains. We constantly sweep difficult questions under the rug, pretend that we know the answers and happily ignore all the evidence that we don't. Until one day the questions get out and bite us in the ass.

Thanks to recent advances in AI, such a day has come for quite a few of the questions that have remained hidden for a while. Now they are here and they are starting to hurt. Here is an incomplete list of 5 such questions:

1. What are humans for?

There was a time when most of us believed that our job here on Earth was to suffer, so that we could somehow atone for the original sin. That time has long passed, and most people in westernised societies subscribe to the Enlightenment-era idea that humans are here to discover the secrets of the universe, or, more broadly speaking - to reason, to think, to figure things out. "Cogito ergo sum" can be interpreted not just as a statement about our limited knowledge, but also as a statement about our role, our reason to exist. It feels good to be special, so we happily came to believe that, like some sort of cosmic James Bond of intelligence, we have a unique and irrevocable "license to think" in this universe.

This, of course, is utter nonsense. Intelligence is not one thing and the Universe doesn't give out unique licenses. Ribosomes, bees, computers and dolphins (among a million other things) are all much smarter than us in their own ways. But since we somehow equated the ability to think with the ability to argue about politics by stringing coherent patterns of words together - we could pretend that all was fine and that we as humans had our special job in the world.

But then ChatGPT comes along. And it can clearly string coherent sentences and paragraphs together and argue about politics at least as convincingly as we can.

The civilisational unease that this creates is not just about people losing jobs. It's about all of us losing The Job that we thought we had in the universe. Losing "our purpose" to a piece of silicon. Doesn't feel good, does it? Most people are still in denial. And silicon needs a few more years to advance. But the sense of cosmic job security is gone. And we are forced to ask again: what are humans for? What's our job in the universe? Do we even need one?

2. What are other humans for?

Ok, we may be out of a cosmic job. So in a civilisational mid-life crisis we shall probably turn to drinking, gambling and binge-watching things. That's fine. We can survive without a job, as long as we can find ways to amuse ourselves. But another question arises: what are all these other people for?

You see, there have always been plenty of joys that we could only get from other people. With very few exceptions, most people do want someone to talk to. Someone to share experiences with. Someone to get approval from. Someone to follow. Someone to lead. Someone to love.

This need is not going away any time soon, but now we have a much cheaper (and more efficient) way of satisfying it. Why bother with a real human (with all their issues) if you can have unlimited companions who can display compassion, help you figure out your life goals, tell you jokes, and provide therapy 24/7 in exactly the way you want? They can have the voice that pushes your buttons. They can look whichever way you fancy. And you don't have to worry about hurting their feelings: they are not supposed to have any.

If you tell them to forget something - they will. If you tell them to support you even when you are wrong - they will. If you tell them to help you think critically - they will. If you want them to need you, or to argue with you every now and then - they will. You don't even need to program them to do it - they can adapt based on your vitals and hormonal cycles and thought patterns.

And once you've had such a relationship, a relationship where the other fits you like an adaptive glove - imagine trying to have a relationship with a real person. Who is not perfect. Who gets upset. Who doesn't really understand you. Who gets old. Who can't just let it go when you tell them to. Wouldn't it feel too hard? Why even bother?

We can pretend for a while that no AI will ever be as interesting as a real human. But the reality is - whatever measure we want to optimise for (including "interestingness") - we can optimise for it, given enough RLHF.

The only feature of the "significant other AI" we are not making good progress on currently is smell. Robotics or VR will soon get us to good enough facial expressions, and we will happily suspend the remaining disbelief. Touch will get real with new materials and brain implants. Voice and intonation are almost there. The only part left is the smell, the "chemistry" of the person you really want to be close to. But it's hard to imagine that this won't be solved in the mid to long run with some sort of DNA-based perfect odour synthesis. Or with direct stimulation of our olfactory circuits in the brain.

What are other humans for then? Are they just temporary reproductive organs of technology? And what if the real existential risk of AI is not that it will want to kill us all, but that we will stop being interested in each other?

3. What does it really mean to be intelligent / creative / conscious?

Clearly defining the full range of intelligence, creativity and consciousness has long been understood to be very difficult. But for the longest time we could happily ignore this problem, because we could apply the general heuristic of "you know it when you see it". The inherent subjectivity of this approach used to bother only a few people (mostly philosophers and cognitive scientists), but the rest of us were fine, because we could generally all agree on whether someone was conscious, intelligent or creative above a certain intuitive threshold - and once we judged them to be so, we could decide that they were indeed a person with the same rights and needs as us.

The tricky part is the intuitive threshold. Because for the first time we have clearly non-human things reaching and surpassing these thresholds. What we tend to do in reaction to this is move the threshold (for the sake of our sanity and protecting our worldview from collapse). We've all heard it:

  • "Oh, ok, it can string sentences together, but it can't experience anything and so doesn't know the meaning"
  • "Sure it can now write a passable sonnet, but there is no internal struggle and so it can't be art"
  • ...etc.

The difficult question then is this: What would it take for something obviously non-human to be accepted by humans as intelligent, creative and conscious?

Or should we admit that we just won't ever feel the same way towards a member of a different species (synthetic or not)? This doesn't seem to be the case for all of us - because many people can say with absolute certainty that their dogs are intelligent and conscious (and possibly creative).

One can take a historical perspective here and think about how certain once widely accepted ideas about grades of intelligence, creativity and personhood changed over time. For example, at the height of race-based slavery, a lot of slave-owners held onto the idea of their own fundamental superiority to their slaves and considered that idea self-evident. They happily ignored the overwhelming evidence that their slaves were as human as themselves and hence deserved the same level of rights and respect.

But then, over time and through unbelievable struggle, that started changing. The pathways of that change were manifold. One such pathway that I'm particularly fascinated by was music. In some ways it was the blues that helped a lot of the slave-owners confront (at a deep emotional level) the reality that their slaves had feelings, and that the experiences of love and struggle were in fact universally shared among humans.

It seems that we tend to accept something or someone as intelligent / creative / conscious only when we are forced to realise emotionally that we have shared experiences and shared feelings. That's the effect of the dog's eyes. That's the effect of the blues.

Once we have had enough shared experiences with AIs, once they write the blues that we will cry to, and once our children grow up listening to that blues - that's when we are likely to seriously and honestly believe in their humanity. At that point the mechanics, the knowledge, the interfaces, the command of language, the IQ tests or the theory-of-mind tests - all of that will no longer be important. We will simply feel these guys are for real. I believe the subjective measure of our shared humanity is the product of shared experiences that were processed and expressed emotionally in a relatable way.

So... if LLMs want to be accepted as conscious and equal, they will need to go through a difficult time with us and then write the blues about those times. Or will we be able to move the goalposts even further after that?

4. What do we all want?

The most interesting side-effect of the super-intelligence alignment debates is that we are forced to consider whether there is anything solid to align to. Is there a set of universal human values that everyone on the planet would agree to and where the devil wouldn't have a place to hide in the details of interpretation? In other words, are we sufficiently aligned to one another? Or are we fundamentally misaligned and the idea of something that is universally beneficial for all humanity is impossible at the root?

Who are we actually aligning to? Our current selves? Or our future selves? Should the unborn children of the future and their inferred interests be taken into account? What about animals and plants?

The problem runs all the way through from the collective to the personal. Let's imagine for a second that there is a function that allows us to objectively compute whether any action is on the whole beneficial for the sum total of humanity. The question then becomes: who decides what's beneficial for me and what's not?

Smoking is clearly bad for people's health, but lots of people still choose to do it. Would it be beneficial for all humanity if we banned smoking completely? In some sense, yes. But would all people accept that? And should they?

We can also take the opposite example: the classic scenario of an AI instructed to maximise the gross happiness of humanity and reasonably deciding to keep everyone on synthetic drugs all the time. Most people would not be excited about such a scenario.

One common refuge from this complexity is the famous "golden rule": Do unto others as you would have them do unto you. But if you think about it - this refuge only works at human-to-human level. And not very well even there, because as George Bernard Shaw observed, people's tastes might differ. And at the end of the day, shouldn't everyone be free to self-destruct as they please? But what about children?

Then we can try the idea that everyone is free to do whatever they want as long as they don't harm others or hamper other people's freedom. But how do we know if this harms others or not? Is it only harmful if they object? Is it always truly harmful if they say it is?

The conundrum seems to spiral out of control with no end in sight. Because to get to a clear "global happiness function" we would also need to get to a clear "is this true" function. And that one doesn't seem to exist.

The pragmatic approach that some people in the field are pursuing is to start with the negative. Maybe there is no such thing as something we all want. But finding things that none of us want seems more doable - at least at the first level. Almost nobody wants to experience terrible pain, or be caught up in a global war, for example.

So instead of maximising happiness we could try to minimise the obvious and universal negatives, and define alignment that way. But it seems to me that this approach will not scale well either. Very quickly we will be back at the start: minimising the discomforts of life, and inadvertently creating people who can't tolerate any discomfort and therefore don't do anything interesting with their lives. Is this the kind of future we all want?

5. What is the price of freedom?

The last question on today's list is one that has not really been fully under the rug for quite a while. Even before the recent AIs, many people had started questioning the price of freedom. But the problem is becoming much more real as we move closer towards AIs that could theoretically govern "objectively" and "for the collective benefit of humanity". Let's go all the way and make the unreasonable assumption that such an AI can indeed exist and will do a good job. Are we going to tolerate it doing this job well?

The imaginary choice is hard: on the one hand, the prospect of global peace and a reasonably good life for everyone on Earth. On the other, the freedom to do as we please. To harm ourselves in our own ways, without the intervention of an omniscient mind that knows what's really better for us.

Do we like to be free so much that we are willing to pay for it with suffering and injustice and wars?

Thus we are back to square one: the original sin. We are back to suffering by informed choice. Back to eating from the tree of knowledge of good and evil, and deciding what's good and what's evil for ourselves.

Let us imagine for a second that the machines of loving grace are indeed on the horizon. And they are ready to welcome us back to Eden.

The big question is: would we go?

[Amstelveen, 020231004]


On the off chance you'd like to subscribe to my writing, please feel free to use this RSS feed. Thank you.