This essay is telescopic. It can shrink or expand, depending on how much attention you are willing to give.
Your Next Job Is Clicking Next
There’s an almost Zen-like simplicity to the "Next" button. We encounter it countless times: installing software, navigating a tutorial, progressing through a slideshow. Click. Advance. Click. Advance. It’s a gentle nudge forward, a promise of progress with minimal cognitive load. I’ve been thinking about this button a lot lately, not as a mere UI element, but as a profound metaphor for a shift in how we work, create, and perhaps even think. What if our next job, in many domains, is simply… clicking next?
Alfred North Whitehead once observed that "Civilization advances by extending the number of important operations which we can perform without thinking about them." There’s a deep truth to this. We don’t churn our own butter or weave our own cloth, for the most part. We’ve outsourced these operations, freeing ourselves for other pursuits. The "Next" button, in its digital ubiquity, feels like a tiny, unassuming agent of this civilizational advance. It takes a complex process – say, the configuration of a new application – and breaks it down into a series of manageable, pre-digested steps. We trust that someone, or something, has done the heavy thinking upstream.
This delegation is seductive. It offers efficiency, a streamlined path from A to B. I recall the early days of learning complex software, the painstaking exploration of menus and manuals. Now, onboarding wizards guide us with a gentle hand, a sequence of "Nexts." We arrive at a functional state faster. But I wonder what is lost in this haste, what understanding isn't built when the journey is so smoothly paved. Do we truly learn the software, or do we merely learn to traverse its introductory corridor?
The question becomes more pressing as we venture deeper into the age of AI. Consider the AI image generator that asks "What next?" after each iteration, or the writing assistant that suggests the next sentence, the next paragraph. The "Next" here isn't a literal button, but a conceptual one. We provide a prompt, the AI generates, and we then guide its next step: refine, elaborate, try a different style, accept. Our role shifts from primary creator to curator, to a steersman on a powerful, semi-autonomous vessel.
I see this pattern emerging in coding, too. With tools like GitHub Copilot, the machine suggests the next block of code, the next function. The programmer's role can become one of validating, tweaking, and stringing together these AI-generated segments. The act of "clicking next" here is accepting the suggestion, or prompting for an alternative. The cognitive load of raw construction is reduced, but a new kind of cognitive skill emerges: the art of effective "nexting."
This isn't necessarily a dystopian vision of de-skilled humans. The quality of the "next" click, the intelligence behind that simple action, becomes paramount. If an AI presents three potential continuations of a design, which one do you choose? If it offers ten variations of a sentence, which one best serves the deeper intent? The "next" decision, while seemingly trivial, can be a moment of profound strategic or aesthetic judgment. It’s the difference between an editor and a proofreader, a conductor and a metronome.
The challenge, as I see it, is twofold:
- Maintaining Deep Understanding: If we are only ever clicking "next" on pre-packaged solutions, how do we retain the ability to build from first principles when the automated path fails or leads to an undesirable outcome? The sailor who only knows how to follow a GPS may be lost when the signal dies.
- Cultivating Meaningful Agency: If our primary interaction with complex systems is reduced to a series of "nexts," how do we maintain a sense of authorship, of meaningful contribution? There's a satisfaction in wrestling with a problem, in the messy, iterative process of creation that goes beyond simply selecting from a set of well-formed options.
Perhaps the "next job" isn't just about the click itself, but about designing the sequence of "nexts." It's about architecting the workflows that AI can navigate, defining the checkpoints where human judgment is most crucial. It’s about becoming discerning consumers and shapers of AI-generated possibilities, rather than passive recipients.
I’m reminded of the difference between following a recipe and understanding the chemistry of cooking. The recipe is a series of "nexts." Understanding the chemistry allows you to invent new recipes, to adapt when ingredients are missing, to truly create. As AI handles more of the "recipe-following," our value might lie in becoming the master chefs who understand the underlying principles, who can taste the digital soup and know exactly what "next" is truly needed.
The "Next" button, then, isn't just an instruction; it's an interface to a vast, delegated intelligence. Our interaction with it might seem simple, but the implications are anything but. It forces us to ask what we want to keep doing ourselves, what forms of thinking we want to preserve, and how we can ensure that "clicking next" is an act of empowered direction, not just passive progression. The future may indeed involve a lot of "nexting," but the wisdom will lie in knowing why, and where we intend to go.
Originally published: May 11, 2025