A client recently told me they didn't want me using AI on their project.
My immediate reaction surprised me: a flash of panic, not because I couldn't solve their problem (complexity challenges always excite me), but because I'd been operating at a scale I couldn't fathom how to sustain manually anymore.
Last December, Anthropic ran what they believe is the largest qualitative study ever conducted: 81,000 people across 159 countries and 70 languages, interviewed by an AI about how they use AI, what they hope it could do, and what scares them about it. The patterns they found mirror exactly what I discovered when my client removed my scaffolding.
I wasn't worried about my judgment or lack thereof. I knew exactly what they needed. I could still diagnose problems, recommend and prioritize solutions, and deliver strategic guidance like a kid with slides (keyword: having fun while being good at sliding). But I realized I'd externalized something crucial: the infrastructure that let me work at velocity across multiple contexts simultaneously.
What I'd Lost (And What I Hadn't)
Here's what disappeared when AI was off the table:
Spell check and prose tightening: Something as simple as catching typos and tightening sentences. For a foreign kid whose first language isn't English, you'd understand!
Working memory during and after sessions: External cognitive scaffolding that lets me hold multiple threads without dropping any
Context switching across multiple projects: Centralized knowledge management that made juggling a few client contexts feel natural instead of exhausting
Multi-project capacity: The ability to maintain quality across a portfolio instead of being limited to one thing at a time
Native AI working environment: The baseline I'd been building from without realizing it had become my foundation
Here's what stayed completely intact:
Problem diagnosis ability
Strategic recommendations
Domain expertise
Taste and quality bar
The judgment that comes from 15+ years of doing this work
The ability to say "I'll be honest with you" and mean it
It's like discovering you can't parallel park anymore because you've been driving a car with backup cameras for three years. The spatial reasoning is still there. The understanding of how cars work is still there. But the mechanical execution? Gone. And honestly? Good riddance to the bottleneck. I'm ready for the self-parking cars.
According to Anthropic's recent study of 81,000 people across 159 countries, 50% of respondents mentioned time-saving as a benefit of AI. But when the AI interviewer pushed deeper on underlying reasons, people revealed it wasn't about working faster. It was about quality of life and operating at scale without burning out.
The scaffolding wasn't making me "faster." It was letting me work at a different altitude across multiple contexts.
What This Has To Do With Design Systems
Imagine asking a construction worker to build a skyscraper without cranes. They still know how to build: the blueprints make sense, they understand load-bearing walls, and they can spot a structural problem from a mile away. But they literally cannot operate at the same scale or height without equipment.
And here's the thing: we don't call that a skills deficit. We don't say "wow, modern construction workers are so dependent on cranes, they've lost the ability to build!" We understand that the tools unlocked a different scale of possibility. The Empire State Building wasn't going to happen with rope pulleys and elbow grease, no matter how skilled the workers were. At the same time, workers are now expected to design blueprints and build with these new tools. The scaffolding didn't reduce expectations: it expanded the job description.
That's what happened to me. The scaffolding didn't reduce expectations. It expanded what I could handle. And when it got removed, I couldn't just "go back" to the old way. The old way doesn't work at the new scale.
In a recent development project, we were debating the role of a System Architect versus the Build Development team. Who's responsible for writing out the blueprint when it requires both systems knowledge and technical implementation expertise? The System Architect knows the big picture but isn't in the code daily. The dev team knows the code but may not see the full system implications.
The answer: you need someone who operates at both levels. Someone who can design the scaffolding and understand how people will actually climb it. Someone who architects the adoption, not just the system. Or a shared memory that lets anyone create and update blueprints.
That's the bridge I was missing. When my client removed AI, they weren't just removing a tool. They were revealing that I'd built my entire practice around cross-domain scaffolding. I wasn't just a product lead or a designer or a strategist or a technologist anymore. I was architecting systems that let me operate across all of them simultaneously. A shared language that shifts with context. These are the working modes of X-shaped individuals.
A design and product system is like a language. A language can only survive when it is constantly being passed on, used, and expanded. The most beautiful grammar rules in the world mean nothing if no one speaks the language. The most elegant design system components mean nothing if they're not adopted, maintained, and evolved by the people who use them daily. And that's exactly what breaks in organizations with design systems.
The Design System Adoption Problem
Your client opting out of AI = a designer or a PM forcing the team to work without the design system. (I rarely meet an Eng who doesn't love a design system. Most of the time, they're begging for one!)
Picture this: You've built a beautiful MCP-connected design system. Figma components auto-sync to code. Documentation updates in real-time. Design tokens stay consistent across platforms. Usage analytics show what's actually being used. It's infrastructure engineering at its finest.
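As a toy illustration of the token-sync idea, here's what "one source of truth, consistent across platforms" boils down to. Every name and value below is invented for the sketch, not a real library:

```typescript
// A single source of truth for design tokens (names and values are illustrative).
const tokens: Record<string, string> = {
  "color.brand.primary": "#1a56db",
  "color.surface.default": "#ffffff",
  "spacing.md": "16px",
};

// Emit the same tokens as CSS custom properties, so the web styles
// and the design library are generated from one place and can't drift.
function toCssVariables(t: Record<string, string>): string {
  const lines = Object.entries(t).map(
    ([name, value]) => `  --${name.replace(/\./g, "-")}: ${value};`
  );
  return `:root {\n${lines.join("\n")}\n}`;
}

console.log(toCssVariables(tokens));
```

The point isn't the transform itself; it's that nobody hand-maintains the second copy. The moment someone does, the drift starts.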
It works brilliantly... for the people who use it.
But then:
One PM insists on building outside the system "just this once" (narrator voice: it's never just once)
One designer doesn't trust the auto-sync, manually exports assets like it's 2019
One engineer bypasses the component library because "it's faster to write custom CSS" (is it though?)
One stakeholder demands designs in PowerPoint instead of Figma because that's what they've always done and they're not changing now, thank you very much
The scale gets impressive FAST, but the divergences are that much more disruptive. Suddenly you have the same problem I had with that client.
You've built scaffolding, but the team is still working at ground level. And when you have to collaborate with them, you either:
Dismantle your scaffolding — work slower, more manually, more error-prone
Maintain two workflows — exhaust yourself translating between systems
Watch the infrastructure rot — people stop using it because it's not universal
Quality doesn't necessarily degrade (judgment is intact), but velocity and scale collapse. The infrastructure investment becomes wasted because adoption isn't universal.
Think of it like a band where half the musicians read sheet music and half play by ear. Both can make music. But when you try to rehearse together, the sheet music readers are waiting for charts while the ear players are already improvising. Eventually, everyone's frustrated, the music director has to run two separate rehearsals, and people start asking "why did we even bother standardizing if half the band is still doing their own thing?" That’s not jazz, that’s bad music. The infrastructure rots not because it's bad, but because it's not universal.
The Anthropic study backs this up: Benefits are experienced (91% of those who mentioned learning benefits had realized them) while harms are often hypothetical (only 46% of those worried about atrophy had seen it). The PM bypassing your system is avoiding hypothetical harm (loss of control, slower iteration) while designers using it experience real benefit (consistency, speed, fewer errors). The adoption gap is a trust gap between experienced benefit and feared harm.
Why Infrastructure Rots: The Four Failure Modes
1. Adoption treated as opt-in, not infrastructural
The system is positioned as "available" rather than "baseline." No enforcement when people bypass it. Routes around the system are tolerated as individual preference instead of organizational anti-pattern.
2. System judged on perfection, not comparison
"The design system doesn't have the component I need" → immediate bypass.
Nobody asks: "Is building custom faster/better than requesting the component?" Edge cases are used to justify abandoning the whole system.
It's like refusing to use your kitchen because you don't have the exact spice a recipe calls for, so you might as well just order takeout forever. At some point, it's just because you hate cooking.
3. Usage measured, not adoption
Teams track: "How many components in Figma?" "How many times was the library accessed?"
Less measured: "What % of the team uses it universally?" "Where are the bypass routes?" "What's the delta between 'system available' and 'system is the default'?"
You can't see the rot until the whole structure is flimsy.
4. Infrastructure fails at the edges where autonomy is lowest
Senior ICs with autonomy adopt eagerly. They see the benefit, they have control over their workflows, they want the scaffolding.
Junior designers, contractors, cross-functional partners bypass it. They're either forced to use it (breeding resistance) or not empowered to co-build it (breeding workarounds).
The system rots from the periphery inward.
The Anthropic study shows this pattern clearly: Independent workers (freelancers, entrepreneurs) experienced economic empowerment from AI at 47% versus 14% for institutional employees. Infrastructure adoption works when people have autonomy. In organizations, people with the highest autonomy adopt. Those forced to use it route around it. The system rots at the edges where autonomy is lowest.
The Product System Enabler's Job Is Architecting Adoption, EVEN MORE SO THAN ARCHITECTING THE SYSTEM
Not just building systems. Architecting adoption.
Here's the framework:
Phase 1: Build scaffolding that's genuinely easier than the alternative
If bypassing is faster than adopting, the system fails (not the user).
Infrastructure must integrate with actual workflows, not ideal workflows. The system must work across all tools the team actually uses — not the tools you wish they used.
Phase 2: Make the system the path of least resistance
Using the system must be easier than not using it. Bypassing must create visible friction. The default state should be "in the system" not "opting in."
This isn't about forcing people. It's about design. Make the right thing the easy thing.
Phase 3: Enforce at the workflow level (not at the individual level)
PRs without design system components get flagged in review
Designs outside Figma library get questioned in critique
Stakeholder deliverables must pull from the system
Define a process for new component reviews and commits so that ownership and accountability for keeping the system up to date extend beyond its original owners
If someone can't use the system, that's a system failure signal, not a discipline problem
You're not policing individuals. You're democratizing the workflow.
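For illustration, the "flag PRs in review" idea from Phase 3 could be a tiny review-bot rule. The file patterns and package name here are invented assumptions, not a real tool:

```typescript
// Hypothetical review check: flag changed files that bypass the design system.
// "@acme/design-system" is an invented package name for this sketch.
const SYSTEM_PACKAGE = "@acme/design-system";

interface ChangedFile { path: string; content: string }
interface Finding { file: string; reason: string }

function auditChangedFiles(files: ChangedFile[]): Finding[] {
  const findings: Finding[] = [];
  for (const f of files) {
    // Custom stylesheets are a common bypass route.
    if (f.path.endsWith(".css") && !f.path.includes("design-system")) {
      findings.push({ file: f.path, reason: "custom CSS outside the system" });
    }
    // UI code that never imports the component library is worth a question.
    if (f.path.endsWith(".tsx") && !f.content.includes(SYSTEM_PACKAGE)) {
      findings.push({ file: f.path, reason: "no design-system import" });
    }
  }
  return findings;
}
```

A finding here is a conversation starter in review, not an automatic block, which is consistent with treating bypasses as signals rather than discipline problems.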
Phase 4: Instrument adoption, not just usage
Track:
% of team using system vs. total team
Where people bypass and why
Delta between "system available" and "system universal"
Treat bypasses as system failure signals to fix, not resistance to overcome.
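The usage-versus-adoption distinction can be made concrete with a minimal sketch. The data shape and the 80% threshold are assumptions for illustration, not a standard metric:

```typescript
// Usage asks "how often is the library touched?"
// Adoption asks "what share of the team works inside it by default?"
interface MemberActivity {
  member: string;
  deliverables: number;          // total deliverables shipped
  inSystemDeliverables: number;  // of those, built with the system
}

// A member counts as an adopter if the system is their default,
// i.e. most of their output goes through it (threshold is arbitrary here).
function adoptionRate(team: MemberActivity[], threshold = 0.8): number {
  const adopters = team.filter(
    (m) => m.deliverables > 0 && m.inSystemDeliverables / m.deliverables >= threshold
  );
  return adopters.length / team.length;
}
```

A component library can show thousands of accesses (high usage) while half the team builds around it (low adoption); only the second number tells you where the rot is.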
Phase 5: Design for adoption across autonomy levels
Power users will adopt voluntarily (they have autonomy). Your job: make it feel volitional even when required.
Show benefit before enforcing standard. Make the "why" visible before the "how."
The Anthropic study found that AI's benefits are strongest when learning is volitional, not forced. Tradespeople showed 45% learning benefits with only 4% cognitive atrophy (volitional adoption). Students had 50%+ learning benefits but 16% atrophy (forced adoption). As an enabler, make adoption feel volitional even when it's required.
Don't Be Surprised, aka Why Your Power Users Complain Most
Here's something counterintuitive from the Anthropic study: Unreliability was the only tension where harm overshadowed benefit (37% of people worried about it vs. 22% who cited decision-making benefits). Yet lawyers — who experienced unreliability at nearly twice the average rate — also reported the highest decision-making benefits.
They complain the most and use it the most.
Translation to design systems: Your power users (senior designers, design leads) will hit the system's limitations hardest. They'll complain the most about missing components, edge cases, sync failures. But they're still using it because the benefits outweigh the costs.
The trap: If you only listen to complaints, you'll think the system is broken.
The reality: The most sophisticated users are your canaries. Their complaints are system failure signals, not evidence to abandon the infrastructure.
The enabler's job:
Instrument where the system fails
Fill the gaps systematically
Keep iterating based on power user feedback
Allow for power users to contribute to the system in meaningful ways
Don't abandon infrastructure because advanced users found edge cases
Treat bypasses as system failure signals that make their way onto the system roadmap
What This Means For Any System Manager Role
As a Product System Enabler, I'm not just building design systems or AI workflows or component libraries.
I'm architecting adoption:
Across autonomy levels (senior ICs to junior designers to cross-functional partners)
Across tool ecosystems (Figma to code to docs to stakeholder deliverables)
Across resistance types (hypothetical fear vs. experienced limitation)
The scaffolding isn't optional equipment. It's the new ground level.
Core principles I'm learning and building in recent projects:
Build infrastructure that's genuinely easier than alternatives
Make bypassing harder than adopting (through workflow integration, not mandates)
Measure adoption, not just usage
Treat bypasses as system failure signals
Design for volitional adoption even when enforcement is required
Instrument power user complaints as canary signals, not abandon-ship votes
Infrastructure that isn't adopted universally operates at the lowest common denominator. The goal isn't "we have a design system" — it's "working outside the system is harder than working inside it."
That client who opted out of AI didn't just slow me down. They showed me what happens when infrastructure isn't universally adopted. You can't just "work a little slower." You have to dismantle your entire workflow to match ground level.
As a Product System Enabler, my job is to make sure teams never have to make that choice. We build the scaffolding everyone uses, or we're not actually building infrastructure. We're building optional tools that will rot. And a girl's got to feel motivated, so let's build things that actually get used.
P.S. On the other hand, I did discover something interesting: without AI in the room with the client, I had to be more intentional about what the key takeaways were and what was being said. I retained a lot more information than I usually do. There are genuine benefits to working without AI in certain contexts. But that's a topic for another article — one about when not to use the scaffolding.
This is Part 5 of an ongoing series on X-shaped people and the future of creative teams. Read Part 1: The X-Shaped Individual: Solving for Problems in 3D and Part 2: It's Only Through Doing That You Become: How X-shaped people are made — and how teams can grow them and Part 3: Your Design System Isn't a Style Guide Anymore — It's AI Infrastructure and Part 4: How Do We Price Human Judgment When 5 Hours Turns Into 30 Minutes
Thu Do is a hands-on product owner with 10+ years bringing products from 0-to-1 across startups, Fortune 500 consultancies (BCG, PwC), and innovation studios. She helps early-stage to early-growth companies ($1-10M ARR) and innovation teams turn big visions into competitive market-ready products and services through human-centered design, product alignment, and AI innovation. This article originally appeared on Thu's Tech Dialect. Find her on LinkedIn.
Co-created with Claude | Based on insights from Anthropic's 81,000-person study on AI usage
