The Permanent and the Provisional: What AI Cannot Do to Us

Miles Wallace

There is a particular kind of anxiety that runs through every serious conversation about artificial intelligence and work. It is not the anxiety of science fiction, the fear of conscious machines that want something from us. It is quieter and more specific than that. It is the fear of being made redundant not by something that hates us, but by something that simply does not need us. That the skills we spent years acquiring, the roles we shaped our identities around, the expertise we traded decades of effort to develop, that all of it might be dissolved, not by malice, but by efficiency.

Chip Huyen and Marina Wyss are both practitioners in the field that is doing the dissolving. They write from inside it, with the credibility of people who have built real things and navigated real consequences. Yet what is most striking about their work, read together, is not the anxiety. It is the absence of it and what that absence reveals about where automation’s actual limits lie.

Huyen’s AI Engineering is, on its surface, a technical manual. It walks through foundation models, evaluation pipelines, retrieval-augmented generation, finetuning, inference optimization, and system architecture. Yet running beneath all of that technical content is a sustained argument about the nature of automation itself: what it reaches and where it reliably fails to reach. The most direct statement of this argument comes in the chapter on dataset engineering. Huyen is discussing synthetic data, the practice of using AI to generate training data for other AI systems, and she acknowledges that this capability has genuinely transformed what is possible. Tasks that once required armies of human annotators can now be seeded, generated, and processed at scale. The automation is real, and its impact on certain categories of labor is real. Then she draws the line: “Many steps in dataset creation aren’t easily automatable. It’s hard to annotate data, but it’s even harder to create annotation guidelines. It’s hard to automate data generation, but it’s even harder to automate verifying it. While data synthesis helps generate more data, you can’t automate thinking through what data you want. You can’t easily automate annotation guidelines. You can’t automate paying attention to details.”

This is a precise and important observation, and its logic extends far beyond dataset engineering. What Huyen is describing is a consistent pattern in how automation interacts with complex tasks: it captures the execution while leaving behind the intention. You can automate the act of labeling a dataset; you cannot automate the judgment about what the labels should mean, or whether the labeling scheme is capturing the right signal in the first place. You can automate generating examples; you cannot automate deciding what an example should demonstrate. The further you travel toward the origin of a task, toward the decisions that determine what the task is for, the less tractable automation becomes.

This is not a permanent technological limitation. Huyen is not claiming that AI will never be able to do these things. She is describing the current landscape, and she is doing so with the careful hedging of someone who has watched the landscape shift rapidly enough to distrust confident predictions. Yet the pattern she identifies is consistent enough to be instructive: automation advances fastest along the axis of execution and slowest along the axis of meaning. The more a task requires understanding why it matters, not just how to perform it, the more it resists replacement.

Her treatment of agents makes the same point in a different register. Agents, in her framing, are AI systems capable of multi-step planning and tool use, systems that can, in principle, execute complex workflows with minimal human intervention. The applications are substantial. Code copilots that navigate entire codebases. Research assistants that synthesize hundreds of documents. Workflow automators that handle data extraction, entry, and lead generation. These are real capabilities with real displacement potential for certain categories of knowledge work. Huyen is careful about what this means. “The more automated the agent becomes,” she writes, “the more catastrophic its failures.” The relationship between automation and risk is not linear; it is multiplicative. A system capable of acting across many steps without human intervention is also a system capable of propagating an error across many steps without human correction. The expansion of agent capabilities does not reduce the demand for human oversight; it transforms it. What gets displaced is routine execution. What remains, and what becomes more critical, is the capacity to design systems, evaluate their behavior, catch their failures, and decide when to trust them. This is the shape of the automation that Huyen maps: not a replacement of human work but a compression of it, pushing human attention toward the parts of every task that require judgment, intentionality, and contextual awareness. The middle layers of execution get automated. The top layer, the layer of meaning, design, and accountability, does not.

Marina Wyss arrives at a similar conclusion from a completely different direction. Where Huyen begins with architecture and works toward implications, Wyss begins with her own life and works outward. The life she describes is instructive precisely because of how improbable it looks in retrospect. She went from managing a jewelry store to Senior Applied Scientist, with nearly an eightfold increase in income. By the usual logic of career advice, this trajectory should be explicable in terms of degrees earned, skills developed, habits cultivated, opportunities seized. Wyss does not deny that these things mattered. Yet she identifies a different variable as decisive: “Having a partner who believed in me, moved across the world for my degree, and held things together when I had nothing figured out changed the entire trajectory of my life. That kind of support is a genuine competitive advantage, professionally, financially, and emotionally, and almost nobody talks about it.”

This is a surprising thing to say in a professional newsletter about AI skills. It is also, on reflection, exactly the right thing to say. Because what Wyss is identifying is the infrastructure that makes risk-taking possible, and risk-taking is precisely what a rapidly changing technological landscape demands of the people trying to navigate it. The conventional career narrative, particularly in ambitious professional circles, tells people to build themselves first. Achieve stability, develop expertise, construct a foundation. Then introduce complexity, in the form of relationships, dependents, obligations. The logic is appealing because it sounds like prudent sequencing. But Wyss points out that it is historically unusual and often self-defeating, because the things it asks you to defer, the partnership, the mutual growth, the deep stability that comes from having navigated difficult things together, are precisely the things that make you capable of sustained professional risk-taking in the first place.

Her own trajectory is evidence for this. She did not arrive at her marriage polished and complete, looking for a partner to fit into an already-built life. She arrived uncertain, underpaid, with no clear direction. Her husband took a risk on her, and she on him. He moved across the world. He supported them financially while she earned almost nothing during her graduate program. He put his own career growth on pause while she figured out hers. The result, eleven years later, is not just professional success but something she describes as structural: a combination of flexibility and deep stability that is “not something you can manufacture after the fact.”

What Wyss is describing, in the language of personal narrative, is a form of human capital that no automation can produce and no credential can substitute for. The support system, the relationship built through shared risk, the deep trust that comes from having grown together through uncertainty, these are not soft additions to a professional toolkit. They are, in her experience, foundational to the capacity for growth itself.

This connects more directly to the AI conversation than it might initially appear. The essential AI skills she argues everyone needs in 2026 are non-technical ones. She does not specify them in the excerpted text; she points toward a video and a blog post. But the argument behind them is clear from the surrounding content. In a landscape where technical skills have a shorter half-life than ever before, where the specific tools you learn today may be irrelevant in three years, durable competitive advantage lies in the capacity to adapt: to learn continuously, to take risks, to navigate ambiguity, to build and maintain the relationships that sustain you through transitions. These are not skills in the conventional sense. They are dispositions, and they are built through experience, not instruction.

The rhetorical distance between Huyen and Wyss is considerable, and reading their language closely is itself instructive. Huyen writes in the hedged, systematic register of technical documentation. Her sentences are qualified by default, “often,” “typically,” “can,” “in many cases,” because she is describing a field where overgeneralization carries real professional risk. She structures each chapter around the question it addresses and the question it leaves open, using rhetorical scaffolding to orient a reader before delivering content. Transitions are explicit and forward-pointing: “the next chapter will explore,” “this chapter ends with.” The prose is designed to be navigated as much as read, yielding its value to a practitioner under pressure who needs orientation quickly. Even her most emphatic claims are framed as patterns rather than laws. She is authoritative without being declarative, and the effect is a voice that earns trust by consistently acknowledging its own limits.

Wyss writes in the confessional mode of the personal essay. She does not hedge; she testifies. Her sentences are short, her assertions direct, and her rhetorical unit is the story rather than the framework. Where Huyen builds an argument by stacking qualified claims, Wyss builds one by narrating experience and letting the reader draw conclusions. Her newsletter pivots from AI skills to marriage without apology, because in her framing the two topics are not separate; both are about what it takes to build a life that can withstand rapid change. The emotional register is warm and specific in a way Huyen’s never is, and this is not a weakness. It is the mechanism by which Wyss reaches readers who would disengage from a technical framework, and it allows her to make claims about human infrastructure that a systematic voice could not carry with the same weight.

What is notable is that both writers, from these very different starting positions, arrive at versions of the same conclusion. Huyen, through technical analysis, identifies the parts of every task that resist automation: the intentional, the contextual, the judgmental, the attentive. Wyss, through personal narrative, identifies the conditions that allow people to develop and sustain those capacities: trust, support, the willingness to grow with someone before either of you has it figured out. Huyen maps what automation cannot reach. Wyss describes what makes humans capable of occupying that space.

Together they outline something that neither states directly: that the future of work in an AI-saturated economy is not primarily a technical problem. It is a human one. The question is not only which skills will remain valuable, though that question matters, but what conditions allow people to develop and sustain those skills over time, through the disruptions and transitions that a fast-moving technology inevitably generates. The answer to that question cannot be found in any model, however large.

It has become fashionable to describe the current moment in AI as unprecedented, and in certain narrow technical senses it is. The capabilities of foundation models represent a genuine discontinuity. The range of tasks that AI can perform, and the speed at which that range is expanding, has no historical parallel. But the underlying human problem, how to build a life and a career that can adapt to change, how to identify the skills and capacities that remain valuable under uncertainty, how to cultivate the relationships and support systems that make risk-taking possible, is not new at all. Every generation has faced some version of it. The specific form changes. The structure does not. Huyen’s work is valuable because it maps the specific form with unusual precision: here is what foundation models can do, here is where they fail, here is what the field of AI engineering requires of the people who work in it. Wyss’s work is valuable because it points toward the structure beneath the form: the human capacities and conditions that determine whether any individual can actually take advantage of what the map describes.

Neither writer is offering false comfort. Both are clear-eyed about what AI will displace and what it demands. Neither is capitulating to the anxiety that drives so much conversation about AI and work. The reason, in both cases, seems to be the same: they have both spent years doing the actual work, and they know, from experience, that the parts of it that matter most, the judgment, the creativity, the care, the relationships that sustain effort over time, are not things that a model generates.

They are things that people build. Slowly, imperfectly, and always in relation to other people. That is what automation cannot reach, and, ultimately, what makes the conversation about automation worth having.