This essay is telescopic. It can shrink or expand, depending on how much attention you are willing to give.
This is an AI-generated version (1770 words, not yet human-reviewed).

The Ladder of Artificial Agency

"Civilization advances by extending the number of important operations which we can perform without thinking about them"
Alfred Whitehead

The relationship between human and artificial agency stands at a pivotal moment in history. As I collaborate with an AI assistant to craft these words, we're engaging in precisely the kind of agency-sharing that defines our technological era. For some 150,000 years, we humans have considered ourselves the exclusive wielders of rational, deliberate agency in our world – or at least that's the story we told ourselves. While other forms of life have always exercised their own kinds of agency, from bacteria navigating chemical gradients to birds crafting elaborate nests, we maintained that our particular flavor of consciousness and decision-making was unique and superior.

Now, as we build artificial minds capable of increasingly autonomous action, we face profound questions about the nature of agency itself. The most pressing question isn't simply how intelligent these systems are, but rather what kind of agents we want them to be and how we want to share agency with them. This represents a fundamental shift in how we think about artificial intelligence – moving beyond capabilities to consider the deeper implications of how agency is distributed between human and machine.

To understand the different modes of artificial agency – the various ways that intelligence manifests as action in our world – we can imagine them arranged on a ladder of increasing autonomy and agency transfer. This isn't an evolutionary ladder (these modes coexist and always will), but rather a framework for understanding how agency flows between human and artificial systems in different contexts.

The Foundation: Technology as Infrastructure

At the bottom rung of our ladder, artificial intelligence operates as pure technology – shaping our world from beneath the surface of awareness, like electricity running through the walls of your house. Here, the human retains almost complete agency (or at least the illusion of it), while the technology amplifies their capabilities tremendously.

Consider autonomous trading algorithms following the overall guidance of human traders. These systems adjust market positions microsecond by microsecond, but they're not really making the important choices – they're expressing the crystallized intentions of their human operators, magnifying their will to act at speeds inaccessible to biological brains. The technology serves as an infrastructure layer, enhancing human agency without claiming much of its own.
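To make the point concrete, here is a toy sketch – every name and number in it is hypothetical, and it is nothing like a real trading system – of how a human's intent gets crystallized into parameters that an algorithm then expresses at machine speed:

    # A toy sketch, not a real trading system; all names and numbers
    # here are invented. The point: the human's intent is crystallized
    # into a few parameters, and the algorithm merely expresses that
    # intent at machine speed.

    from dataclasses import dataclass

    @dataclass
    class HumanMandate:
        symbol: str        # what to trade
        max_position: int  # how much exposure the human will tolerate
        buy_below: float   # the human's view of a fair entry price
        sell_above: float  # the human's view of a fair exit price

    def on_tick(m: HumanMandate, price: float, position: int) -> str:
        """Called microsecond by microsecond, yet every branch below is
        a decision the human already made when setting the mandate."""
        if price < m.buy_below and position < m.max_position:
            return "BUY"
        if price > m.sell_above and position > 0:
            return "SELL"
        return "HOLD"

    mandate = HumanMandate("XYZ", max_position=100, buy_below=9.95, sell_above=10.05)
    print(on_tick(mandate, price=9.90, position=0))  # -> BUY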

However, even at this foundational level, the distribution of agency isn't always clear. When a collection of trading algorithms triggers an unexpected flash crash, whose agency was really expressed? The human traders who set the parameters? The developers who wrote the algorithms? The emergent behavior of the system itself? These questions preview the increasingly complex agency relationships we'll encounter as we climb the ladder.

Tools and Their Limitations

One step up, we encounter AI as a genuine tool – and here we must be precise, because "tool" is perhaps the most misused word in discussions about AI. A hammer is a tool because its agency is nearly zero; it does exactly what the human wielder intends (assuming proper skill), imposing only physical constraints. Most of what we casually call "AI tools" today aren't really tools in this pure sense – they have too much agency of their own.

Consider the difference between a pencil and an AI image generation system. The pencil is a true tool, translating human intention directly into marks on paper. The AI system, however, makes countless decisions about composition, style, and interpretation that weren't explicitly specified by the human user. A spell-checker is a tool. An AI writing assistant that can "continue in the style of Hemingway" is something else entirely.
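The difference can even be stated in code. In this toy sketch (the correction table and the continuations are invented for illustration), the spell-checker leaves no decision unspecified, while the "assistant" chooses among continuations the user never asked for:

    import random

    # Illustrative only: a hand-rolled contrast between a true tool and
    # a system with residual agency. Neither is a real product.

    CORRECTIONS = {"teh": "the", "recieve": "receive"}

    def spell_check(text: str) -> str:
        # A true tool: a fixed mapping. Same input, same output; every
        # decision was made in advance by whoever compiled the dictionary.
        return " ".join(CORRECTIONS.get(word, word) for word in text.split())

    def continue_in_style(prompt: str) -> str:
        # A stand-in for a generative assistant: it picks a continuation
        # the user never specified. That unspecified choice is where its
        # agency lives.
        continuations = [
            "The sea was flat and the nets were empty.",
            "He drank the coffee and said nothing.",
            "The rain came early that year.",
        ]
        return prompt + " " + random.choice(continuations)

    print(spell_check("teh old man recieve nothing"))  # deterministic
    print(continue_in_style("He sat by the water."))   # different every run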

The boundary between tool and agent becomes clearer when things go wrong. If a hammer breaks while you're using it, you don't wonder about its intentions or motivations. But when an AI writing assistant produces unexpected output, we often find ourselves questioning its "understanding" or "goals." This anthropomorphization reveals that we're dealing with something more than a simple tool.

The Cognitive Prosthetic

The third level is where agency begins to truly blur: AI as a cognitive prosthetic that doesn't just extend our capabilities but shapes how we think and remember. This represents a qualitative shift in the human-AI relationship, as the technology becomes inseparable from our cognitive processes.

When researchers use AI systems to explore connections across thousands of papers, or when writers use AI to help structure their thoughts, the resulting insights aren't purely human anymore – they emerge from a hybrid dance of agencies. Like our temporal lobe, these systems gradually become an integral part of our extended mind, influencing not just what we can do but how we think about what we can do.

This integration raises profound questions about the nature of human cognition and agency. If an AI system helps shape our thoughts and memories, who is really doing the thinking? Where does human agency end and artificial agency begin? The boundaries become increasingly fuzzy as these systems become more sophisticated and more deeply integrated into our cognitive processes.

The Collaborative Agent

At the fourth level, AI emerges as a distinct agent with its own domain of competence within a primarily human process. This is the "team member" level, where AI systems operate like that brilliant but slightly odd colleague whom nobody quite understands but everyone relies on. Much of current AI development sits here: systems that can engage in meaningful collaboration while remaining part of a human-led process.

This essay itself exemplifies this level of agency-sharing: it was crafted through collaboration between a human author and an AI assistant. The AI brings its own perspective and capabilities to the table, while the human maintains overall creative direction. The result is a dynamic interplay of agencies, in which both parties contribute their distinct strengths to the final product.

Autonomous Teams

The fifth level represents a significant leap in artificial agency: AI systems working together as autonomous teams within human organizations. Imagine a corporate research department where AI agents collectively handle literature review, experimental design, data analysis, and paper drafting – while humans focus on asking the right questions and making strategic decisions.

This level introduces new complexities in agency distribution. The artificial team has significant autonomy in how work gets done, but the fundamental direction still comes from humans. This creates interesting dynamics around responsibility and control: How do we ensure accountability when decisions emerge from the collective agency of multiple AI systems? How do we maintain meaningful human oversight without micromanaging the artificial team?
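A minimal sketch of what such a pipeline might look like – the agent roles and the orchestration loop are hypothetical, standing in for real model calls – shows where the autonomy sits: the human supplies the question, and the artificial team decides how the work flows between its members.

    from dataclasses import dataclass

    @dataclass
    class Agent:
        role: str

        def run(self, task: str) -> str:
            # Placeholder for a real model call; here we just tag the work.
            return f"[{self.role}] output for: {task}"

    def research_pipeline(question: str) -> str:
        team = [
            Agent("literature review"),
            Agent("experimental design"),
            Agent("data analysis"),
            Agent("paper drafting"),
        ]
        artifact = question
        for agent in team:
            # Each handoff happens without human sign-off: the autonomy
            # lives in the how, while the what came from the human.
            artifact = agent.run(artifact)
        return artifact

    # The human's entire contribution to execution is this one line:
    print(research_pipeline("Does sleep loss impair working memory?"))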

The Ultimate Delegation

At the top of our ladder sits the most provocative form: AI as an autonomous trustee making consequential real-world decisions without detailed human oversight. Here, humans have delegated significant agency to the artificial system, creating fascinating new dynamics of control and influence.

Consider an autonomous enterprise where humans are shareholders and board members, but the CEO and all the workers are AI agents. Or imagine AI assistants making and executing purchase decisions on behalf of humans. The system then takes on a curious dual role – it is both a trustee (the user surrenders agency to it) and a target audience (sellers who want to influence a purchase must now persuade the AI rather than the human).

The Design Challenge

The ladder of artificial agency gives us six distinct levels of agency distribution between humans and AIs: technology (1), tool (2), temporal lobe (3), team member (4), team (5), and trustee/target (6). Like any model, it's a crude simplification of a much more fluid reality. Agency rarely fits into neat categories – it flows and shifts, often occupying multiple levels simultaneously.

The fundamental question isn't just what level of artificial agency we want our AIs to exhibit – it's what we as humans want to keep thinking about and deciding for ourselves, and what we're willing to delegate and eventually forget. Because delegation almost invariably brings the risk of atrophy. Skills we don't use wither. Understanding we don't maintain fades. Agency we don't exercise diminishes.

This presents us with crucial design challenges. Each level of artificial agency creates different affordances: different possibilities for human engagement and different paths for system evolution. When we let AI operate as pure technology, we gain efficiency but risk losing touch with the underlying processes. When we employ it as a tool, we extend our capabilities but must maintain the skills needed to guide it. When we accept it as a cognitive prosthetic, we amplify our mental abilities in some ways, but may become dependent on its support in others.

The choices we make about agency distribution will shape not just the future of AI development, but the future of human capability and consciousness itself. As we climb this ladder of artificial agency, we must carefully consider what we gain and what we might lose at each step. The ultimate question becomes not just what our AI systems can do, but what kind of agents – both human and artificial – we want to be.


By George & Claude


Original version published: January 8, 2025