Testing this next generation of AI coding tools, I gave one a prompt for a fun website concept. It understood the vision, set up the structure, populated the pages—and in the time it took me to run an errand, I had something I could publish.
That's not a parlor trick. It's a real change in what "making" feels like.
It also sets a trap.
Because the output was ready to launch—as long as I was ready to launch what it chose. The moment I asked for changes, the experience shifted. Not because the work got worse, but because it started to wobble. What had felt coherent became negotiable. What had felt crisp became open to reinterpretation.
The through-line in many vibe-coding success stories is that the shipped thing succeeds in isolation. It works inside its own assumptions. It resolves the surface. It satisfies the immediate brief.
The moment you ask it to persist—across revisions, across integrations, within real constraints—the wobble returns.
Stand-Alone Good vs. Sustained Direction
One of the most important boundaries in modern product work was the line between design and engineering. Not because designers couldn't code or engineers couldn't design, but because scale required specialization. Artifacts moved between teams. Clarity had to survive handoffs.
Designers weren't just drawing screens. They were defining the conditions that made those screens buildable: what happens on mobile versus desktop, what happens when data is missing, what happens when the user does something unexpected, what a system should refuse to do.
Those weren't bureaucratic concerns. They were what kept "the demo" from becoming the product.
I've seen what happens when that distinction collapses in the wrong way.
A senior executive, frustrated with the pace of internal teams, was taken in by outside contractors. Their prototypes looked incredible in walkthroughs. Everything worked—on the path they demonstrated. The speed was intoxicating. The confidence was contagious.
Then the work moved from demonstration to reality.
The contractors' prototypes weren't designed to integrate. They relied on mock data and ideal conditions. The boundary decisions—the constraints, the edge cases, the systems thinking—were still unmade. That burden fell to the internal team, and suddenly what had taken days to demo took weeks to ship.
The demo wasn't wrong. It was incomplete in a specific way: it hadn't been forced to hold together under real constraints.
AI amplifies that pattern. It doesn't create it.
If your goal is a thing that works in isolation, AI is extraordinary. But a thing that works in isolation is a different achievement than a thing that holds its shape over time. And the moment the work has to survive iteration—not just generation—the weakness isn't quality.
It's drift.
Drift Under Refinement
Drift rarely looks dramatic. It doesn't arrive as failure. It arrives as improvement.
I've seen it most clearly in reviewing advertising work, where distinctiveness is the entire point. A concept lands well. The tone has edge. The structure feels intentional. It says something specific, in a way that doesn't feel generic. You're not chasing novelty—you've found alignment.
Then comes refinement. Tighten the headline. Clarify the promise. Strengthen the call to action. Adjust the pacing.
The next round returns polished. The changes are all made as requested. But it's still not ready to approve—now something else is off. Another round. And another. The loop lengthens, even though everyone is doing the work.
It's not that the team isn't taking the requested actions. It's that the surrounding decisions weren't protected. The voice shifts slightly toward safety. The structure recenters around familiar patterns. The sharper phrasing becomes broadly acceptable. The interesting tension smooths out.
Not worse, exactly. Just less itself. Each individual choice can be defended. Each change is reasonable. But taken together, the work slowly migrates toward the center—toward what is most likely to be acceptable across contexts.
This is a human problem. It predates AI by decades. Anyone who has shepherded creative work through rounds of revision recognizes it.
What AI changes is the speed. The same drift that used to unfold over weeks of committee feedback can now happen in an afternoon of iterative prompting. Each revision addresses what you pointed at. Each revision also quietly renegotiates what you didn't.
Retained Intent
Here's what makes drift so hard to fight: the decisions that gave the work its edge were often invisible to the person who made them.
When a piece of work really lands—when the tone is right, the structure holds, the tradeoffs feel purposeful—it's tempting to think you know why it works. You can point to the headline, the layout, the palette. But underneath those visible choices sits a layer of decisions you probably never articulated. Why that level of formality and not one notch higher. Why the secondary information was subordinated instead of removed entirely. Why the pacing felt earned rather than rushed. Why you left a certain tension unresolved instead of smoothing it into comfort.
Those micro-decisions accumulated into a point of view. They're what made the work feel chosen rather than assembled. And most of them were never conscious enough to protect, because you made them in the flow of making—through taste, through instinct, through the accumulated weight of everything you've designed before.
That's the problem retained intent actually names. Not "remember what I told you." Something harder: protect decisions I never explicitly made.
When you're the one making the work, this is invisible. You are the continuity. Your hands carry forward what your mouth couldn't articulate. You feel a violation before you can explain it, and you fix it without needing to justify the fix.
When a system is generating alongside you, that continuity breaks. Not because the system is careless, but because it has no access to the layer where your real decisions live. It can follow your instructions precisely and still lose the thing that mattered—because the thing that mattered was never in the instructions. It was in the texture between them.
This is why "write a better prompt" has a ceiling. A prompt can capture what you know you decided. It can't capture what you haven't surfaced yet. And the decisions that make work distinctive are often the ones you didn't know you were making until you see them undone.
Retained intent, then, isn't a feature request. It's a design problem. The most important one AI introduces: how do you preserve coherence when the person who holds the vision and the system doing the work don't share a memory?
Governance Is a Creative Act
"Governance" sounds bureaucratic.
In practice, it's the answer to that question—and it's more creative than it sounds.
A boundary isn't a restriction. It's a decision made once instead of re-litigated a hundred times. A constraint isn't a limitation. It's identity preserved across variation. A refusal isn't negativity. It's coherence.
But here's what I didn't expect: governance isn't just encoding rules you already know. It's the process of discovering what you actually believe about the work. When you sit down to define what must not drift, you find yourself articulating things you've always felt but never needed to say. We prioritize clarity over cleverness. We refuse hype language. We prefer tension to politeness. We do not collapse complexity into false simplicity.
Those aren't prompts. They're excavations. They surface the invisible layer—the accumulated taste and instinct that lived in your hands—and make it legible to a system that can't read your mind.
And it doesn't end there.
The best governance isn't set once and left alone. It grows through contact with the work. You encode what you can before you start, and then the work teaches you what you missed. A revision comes back and something feels off—not wrong in a way you anticipated, but wrong in a way that reveals a principle you hadn't named yet. That's a signal, not a failure. You document it. The boundary set gets sharper.
It works in the other direction too. Sometimes you see something survive three rounds of revision untouched—holding its shape while everything around it changed—and you realize it's load-bearing. Worth naming. Worth protecting explicitly the next time, instead of hoping instinct catches it again.
Governance is a living layer. It starts as what you know, and it becomes what you've learned.
What caught me off guard was realizing I'd always had this—I just never had to name it.
When I built for myself, my taste was the constraint. My non-negotiables lived in my head. I didn't articulate them because I didn't need to: I was the memory layer. The work ran through me, and I corrected drift before it registered as drift.
AI changed that quietly. Not because it ignores direction—because it pursues it, constantly, in every direction it thinks you'd approve of. It resolves ambiguity. It moves forward. It makes calls on your behalf based on whatever signal it has available. And it is very good at making those calls feel like progress.
Which means without encoded boundaries, you're not steering. You're reacting—chasing each revision with a prompt that addresses what you can see, while the decisions you can't see keep getting made.
Governance is what makes the difference proactive instead of reactive. For me, that often ends up as markdown—README-like documents for the AI that aren't prompts so much as standing orders. Not because the system needs hand-holding. Because without encoded intent, autonomy becomes expensive.
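To make that concrete, here's a minimal sketch of what such a standing-orders file might look like. The file name and every rule in it are hypothetical; the shape is what matters: non-negotiables written as decisions, not suggestions.

```markdown
<!-- VOICE.md: a hypothetical standing-orders file for an AI collaborator -->

# Standing Orders: Voice and Structure

These are decisions, not suggestions. They apply to every revision
unless the request explicitly overrides them.

## Non-negotiables
- Clarity over cleverness. If a line is clever but unclear, cut it.
- No hype language. "Revolutionary," "game-changing," and their cousins are refused.
- Prefer tension to politeness. Do not smooth an interesting edge into comfort.
- Do not collapse complexity into false simplicity. Subordinate it; never delete it.

## Boundaries discovered through revision
- Headlines were drifting toward questions. We state; we do not ask.
- Secondary information gets subordinated, never removed. Hierarchy, not omission.

## When in doubt
Flag the ambiguity instead of resolving it. An open question is
cheaper than a confident wrong call.
```

A file like this isn't instructions for a task. It's the memory layer, externalized: the decisions that used to live in your hands, held steady while the surface changes.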
Not because output collapses.
Because it converges.
And convergence is the enemy of distinctiveness.
When generation is abundant, direction is scarce.
The Designer in the Loop
The system needs you upstream.
Not to approve outputs. Not to catch drift after it's happened. But to define the field in which acceptable solutions are allowed to exist—the perimeter that determines what "good" is even allowed to mean for this work.
That requires a specific skill, and it's not the one designers were trained for.
For most of a designer's career, the work moved in one direction: words became visuals. A brief became a layout. A strategy became a brand. An insight became an interface. The craft was in the translation—taking language and turning it into something you could see, touch, use.
Now the critical motion is the reverse. The system produces the visual, and the designer's job is to look at what it made and articulate—in words, precisely—what's working, what's wrong, and why. Not "this doesn't feel right." Not "make it more premium." But: the voice has shifted from direct address to institutional distance, and that undermines the intimacy we need for this audience. The hierarchy is correct but the rhythm is monotonous—every section lands with the same weight, which flattens the argument. The palette reads as safe when the brand posture is supposed to carry an edge.
That's the new muscle. Reverse-engineering your own taste into language specific enough for a system to act on. Not describing what you want in the abstract—that's a brief, and AI is already good at executing briefs. Describing what you're seeing in the concrete, with enough precision to protect what's working and redirect what isn't.
It's a harder skill than it sounds. Designers have spent years developing the instinct to recognize when something is off. They have not spent years explaining why in the kind of operational language that translates to boundaries. That gap is where the real growth is.
If your ambition is stand-alone good, you may not need to be in that loop. The system can get there without you. But if your ambition is sustained, coherent, distinctive work—across revisions, across systems, across scale—then someone has to hold the boundary. Someone has to name what this protects and what it refuses to become.
For most of your career, you did this without naming it. Your taste was the constraint. Your instincts were the boundary layer. You held the non-negotiables in your head because you were the continuity.
That's still the job.
It just has to be explicit now.
