Yesterday, I watched myself paste a planning document into Claude Code, ask it to compare against 50+ Linear tickets, generate a gap analysis, and create missing tickets with proper dependencies—all in under five minutes. Work that used to take me three hours on a Tuesday afternoon.
Boris Cherny (the guy who created Claude Code) went viral last week for sharing his workflow—running 5 AI agents in parallel like he's commanding a small army. The engineering world lost its mind. "This is the future of coding," they said.
But here's what nobody's talking about: this isn't just a coding thing. This exact pattern works for design, product, consulting—any knowledge work where you're orchestrating complexity rather than cranking widgets.
I've been running a version of Cherny's workflow for design and product work for the past 18 months. The 5x productivity gains are real. But the gains didn't make me a productivity machine. They freed me to do the work I was always supposed to be doing: the human stuff.
Let me show you how.
1. Work Inside AI Environments (Not Just "Use AI Tools")
Here's the shift that changed everything: I stopped working in Google Docs and importing stuff to AI for edits. I started working inside AI interfaces where thinking, drafting, testing, and documentation happen simultaneously.
Right now, as I write this article, we're building it in a Claude Artifact. The outline we created earlier? Still here. Every iteration? Visible. The context from our entire conversation? Persistent. When we're done, this working document is the final document.
Compare that to the old way:
Think (in my head, maybe a whiteboard)
Draft (Google Doc)
Get feedback (comments, meetings, Slack threads)
Revise (back to Google Doc)
Then maybe use AI to polish
Document the process separately (if I remember)
Versus now:
Think + draft + test + document all at once, inside the AI interface
Context builds cumulatively (AI remembers everything we discussed)
Iterations are instant
The output is immediately shareable
It's like the difference between cooking a meal where you write down the recipe as you go versus trying to remember what you did after the fact and hoping you can recreate it.
The collaboration friction nobody talks about:
Here's the problem: most people aren't ready for this. Clients still want Google Docs. Teammates need comment threads. And AI platforms themselves are kinda broken for collaboration—Claude and ChatGPT both make you start a new thread every time you say "Hi." Who designed this?
So I'm working inside an environment that 10x's my speed, while everyone around me is still in the old workflow. It's like I'm driving an EV and keep having to pull over to explain to everyone why we don't need gas stations.
2. Multi-Agent Orchestration (Or: My FigJam Looks Like a Conspiracy Theory)
Here's where it gets interesting. I'm not just using one AI—I'm running multiple agents in parallel, each with a specific job.
I've been doing this weird thing lately where I stare at my AI agent FigJam like I'm planning a heist:
Chief of Staff Agent coordinates research, synthesizes findings, keeps the timeline straight
Content Agent handles first drafts and iterations
Knowledge Hub remembers that thing from three projects ago that's suddenly relevant
Presentation Agent builds the deck while I'm thinking through the strategy
New Business Agent scouts leads on LinkedIn while I'm sleeping
Each agent has its lane. I'm the coach making sure they're all moving towards the same goal.
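If it helps to see the lanes as structure, here's a minimal sketch in Python. The agent names are the real ones from my FigJam; the task types and the router are illustrative, because in practice this is me switching tabs between Claude projects, not a script:

```python
# Sketch only: each agent owns a lane, and a task routes to exactly one.
# In my actual setup these are separate Claude projects, not one program.
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    lane: set[str]  # the task types this agent owns

AGENTS = [
    Agent("Chief of Staff", {"research", "synthesis", "timeline"}),
    Agent("Content", {"draft", "iteration"}),
    Agent("Knowledge Hub", {"recall", "archive"}),
    Agent("Presentation", {"deck"}),
    Agent("New Business", {"lead"}),
]

def route(task_type: str) -> Agent:
    """Return the one agent whose lane covers this task type."""
    for agent in AGENTS:
        if task_type in agent.lane:
            return agent
    raise ValueError(f"Nobody owns {task_type!r}; that decision is mine.")

print(route("draft").name)  # -> Content
```

The useful part isn't the code, it's the constraint it encodes: one owner per task type, and anything unroutable escalates to me.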
Boris Cherny runs 5 Claude instances in his terminal with system notifications telling him when each one needs input. I'm doing the same thing, just spread across tabs and projects instead of terminal windows. When Content Agent finishes a draft, I review it. When New Business Agent flags a potential lead, I decide if it's worth my time. When Chief of Staff synthesizes research, I validate the strategic direction.
It's less like "using AI" and more like managing a very fast, very literal team that never sleeps but also needs constant direction. The limit becomes how fast I, the human, can move.
The productivity gain isn't just speed—it's cognitive offloading. I'm not trying to remember where I saved that client insight from two months ago. Knowledge Hub has it. I'm not context-switching between writing an email and building a presentation. Different agents handle different outputs.
My brain is freed up to do the thing only I can do: make judgment calls about what actually matters.
But here's what keeps me up at night:
All of this—every project file, every client conversation, every workflow pattern I've built—lives inside Claude. What happens if Anthropic shuts down tomorrow? What if they change their pricing model and I can't afford it? What if there's a 23andMe situation where the company goes bankrupt and all my intellectual property becomes part of some asset sale?
I don't have a good answer. I'm choosing speed over certainty, betting that the productivity gains are worth the platform risk. But I'm not sleeping well about it. Every few weeks I export everything I can, but the context, the conversation history, the way the Knowledge Hub connects things—that's not exportable. That's trapped in the platform.
It's the booby trap nobody talks about when they're celebrating 5x workflows.
3. What This Means for Product Teams: Design ↔ Product Role Blending
Remember that Linear example from the opening? Let me walk you through what actually happened.
I had a planning document from a stakeholder meeting. Nothing fancy—just a Google Doc with a bunch of "we need to build X" statements. Normally, I'd spend Tuesday afternoon doing this:
Open Linear, scroll through 50+ existing tickets
Cross-reference the planning doc against what's already in the backlog
Identify gaps
Write new tickets with descriptions, dependencies, labels
Tag the right people
Update the planning doc with ticket links
Three hours, minimum. Longer if I got distracted or couldn't remember which ticket covered what.
Here's what I did instead:
Pasted the planning doc into Claude Code. Typed: "Pull the tickets from this document that are relevant, compare them with existing tickets in Linear, create a summary MD file showing me gaps to fill."
Claude generated the gap analysis in a markdown file. I reviewed it, added some context where the AI missed nuance, then typed: "Create these missing tickets in Linear with proper dependencies."
Five minutes to create. An hour to review.
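If you're curious what that first step looks like as plumbing, here's a hedged sketch. It is not what Claude Code actually ran: it assumes a personal Linear API key in a LINEAR_API_KEY environment variable and Linear's public GraphQL endpoint, and the matching heuristic and helper names are mine.

```python
# Sketch of the gap-analysis step: pull backlog titles from Linear,
# flag planned items nothing in the backlog mentions, write gaps.md.
import os
import requests

LINEAR_URL = "https://api.linear.app/graphql"

def fetch_ticket_titles() -> list[str]:
    """Fetch existing issue titles via Linear's GraphQL API."""
    query = "{ issues(first: 100) { nodes { identifier title } } }"
    resp = requests.post(
        LINEAR_URL,
        json={"query": query},
        # Personal API keys go straight in the Authorization header.
        headers={"Authorization": os.environ["LINEAR_API_KEY"]},
    )
    resp.raise_for_status()
    return [n["title"] for n in resp.json()["data"]["issues"]["nodes"]]

def find_gaps(planned: list[str], titles: list[str]) -> list[str]:
    """Naive check: a planned item no ticket title mentions is a gap."""
    lowered = [t.lower() for t in titles]
    return [p for p in planned if not any(p.lower() in t for t in lowered)]

if __name__ == "__main__":
    planned = ["bulk export", "SSO login", "audit log"]  # from the planning doc
    gaps = find_gaps(planned, fetch_ticket_titles())
    with open("gaps.md", "w") as f:
        f.write("# Backlog gaps\n\n" + "\n".join(f"- {g}" for g in gaps))
```

Note that the sketch writes gaps.md to disk instead of touching Linear. That pause for review is deliberate, and it matters in a minute.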
But here's what's actually interesting about this:
That's not design work. That's product ownership work.
I was translating strategy documents into executable backlogs. Managing dependencies. Prioritizing features. Coordinating across systems.
The verification loop (reviewing the MD file before committing to Linear) meant I kept the judgment and strategy. AI handled the tedious cross-referencing and ticket-writing.
When you can move this fluidly between strategic planning, backlog management, and experience design—all accelerated by AI—the traditional boundaries between disciplines start to dissolve. You're not "just a designer" or "just a PM." You're operating across the full product stack.
AI gave me the capability to shape and drive product. And with that capability came a question: where's the next skill frontier?
4. The Center .md File (Your Source of Truth), and the Hard Thing About Soft Skills
Every project I run now has a living .md file. It's like the project's brain—capturing decisions, patterns, constraints, things we tried that failed spectacularly.
When an AI agent makes a mistake, it goes in the .md as a rule. "Don't use corporate jargon in emails to this client." "This client gets lost with technical feasibility—frame suggestions as options, not recommendations." "CFO only reads bullet points, skip the narrative."
Boris Cherny's CLAUDE.md is technical rules for code. Mine is half technical, half emotional intelligence database.
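For concreteness, here's a trimmed sketch of what one of these files looks like. The human rules are the real ones quoted above; the project name and the technical entries are placeholders:

```
# project-brain.md (placeholder name)

## Technical rules
- Specs link to the Linear ticket, never the Slack thread.
- Deliverables go out as PDF; the client's CMS mangles .docx.

## Human rules
- Don't use corporate jargon in emails to this client.
- This client gets lost with technical feasibility; frame
  suggestions as options, not recommendations.
- CFO only reads bullet points, skip the narrative.
```

Every rule is there because something went wrong once. The file only grows.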
But here's where it breaks down:
Last week I was on a call where a client said "yeah, that sounds fine" but their whole body language screamed hesitation. The half-second pause before "fine." The way they shifted in their seat. The fact that their CFO nodded but their Head of Product looked away.
After the call, I brain-dumped into the .md: "Client said fine but body language = hesitant. CFO on board, Head of Product skeptical. Tone for follow-up: confident but leaving room for pushback. Don't oversell—they need space to process."
Then I fed that context to Claude with instructions: "Draft follow-up email. Confident but not pushy. Acknowledge we moved fast. Give Head of Product an opening to raise concerns without losing momentum."
Claude wrote the email. It was... fine. Technically correct. But it didn't quite nail the vibe I was going for.
So I rewrote three sentences. Softened one paragraph. Added a specific question that would make the Head of Product feel heard.
Here's the question I can't answer yet:
How much of human EQ is teachable to AI versus forever dependent on me being the translator?
I can document patterns. "When CFO asks about ROI, they're skeptical—include cost-benefit framing." AI can learn that rule.
But can AI learn to read a room? To sense when someone's "yes" means "I need more time"? To know when silence means disagreement versus just processing?
The hard thing about soft skills is that they're fluid and intangible. They shift based on context, mood, history, trust. You can't reduce them to rules without losing what makes them work.
Right now, I'm still the translator. I take the human dynamics from the call and convert them into AI instructions. The .md file captures rules, but it can't capture the feeling of a conversation going sideways.
Maybe that's okay. Maybe that's the skill frontier AI revealed: the definable work (cross-referencing tickets, drafting emails, building presentations) can be systematized. The human work—reading what's unsaid, building trust over time, knowing when to push and when to wait—that's what actually matters now.
5. Verification Loops (Augmentation Over Automation)
And this leads to where Boris Cherny's workflow and mine diverge.
When Cherny's AI writes code, he verifies it by running tests. The tests either pass or fail. Binary. Clean. "Did the function return the expected output? Yes or no?"
When my AI drafts a client email, I can't just run a test. There's no "did this email make the client feel heard?" boolean.
So my verification loop looks different:
AI generates the first draft → I verify it aligns with strategy
AI proposes a design system → I check it against brand integrity
AI synthesizes research → I validate it against client context
AI writes stakeholder updates → I read for tone, not just accuracy
The loop isn't "AI does, human approves." It's "AI generates first drafts, human ensures quality and ethics."
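In code terms: if Cherny's loop is a test suite, mine is closer to this sketch, where the checks are questions for a human rather than assertions. The function name, the draft, and the checklist are all illustrative:

```python
# Sketch of a human-in-the-loop gate: the model proposes, a person decides.
def human_gate(draft: str, checks: list[str]) -> bool:
    """Show the draft plus the things no test can assert, then ask."""
    print(draft)
    for check in checks:
        print(f"  [ ] {check}")
    return input("Ship it? (y/n) ").strip().lower() == "y"

draft = "Hi both, a quick recap of yesterday's call..."  # AI first pass
approved = human_gate(draft, [
    "Aligns with the strategy we agreed on",
    "Tone fits where this client is right now",
    "Leaves the Head of Product room to push back",
])
if not approved:
    print("Back to the model with notes. The loop, not a rubber stamp.")
```

The boolean at the end is a lie, of course. The real output is whatever judgment happened while a human read the draft. But the shape holds: generate, gate, then commit.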
The difference between code and people:
Code is deterministic. Same input, same output. People are... not. They change based on their day, their stress level, whether their kid is sick, whether they just got good news or bad news from their board.
AI can handle deterministic work at scale. It struggles with the adaptive, contextual, "read the room in real-time" work.
The opportunity is in the work that can't be defined. Not A/B testing email subject lines for the 50th time. But making the judgment call about whether this particular moment requires directness or space.
The tools to multiply human output by 5x are here. But here's what I'm learning: AI isn't handling the "busy work" so I can do the "real work." AI is doing real work—it's just doing the definable work. The work that can be articulated in a .md file, broken into steps, verified in loops.
What remains is the work that can't be defined: reading a room, sensing hesitation in someone's voice, knowing when to push and when to give space, building trust that takes years not minutes.
The 5x productivity didn't free me to generate more output. It freed me to be more human.
AI gave me back time. I'm spending it in rooms with people. Reading body language AI can't see. Building trust that can't be automated. Having the coffee conversations that don't go in the .md file but shape whether someone actually trusts me to own the product work.
And yeah, there's still that question that keeps me up at night: Who owns the work created inside AI environments? What happens when the platform changes or disappears?
For now, I'm choosing speed over certainty. I'm choosing to invest the time AI gave me back into the relationships that can't disappear with a platform shutdown.
The definable work lives in the cloud. The human work? For now, that stays with me, at least until AI notetakers can read people's reactions accurately.
Thu Do is a hands-on product consultant with 10+ years bringing products from 0-to-1 across startups, Fortune 500 consultancies (BCG, PwC), and innovation studios. She helps early-stage to early-growth companies ($1-10M ARR) turn big visions into competitive market-ready products and services through human-centered design, product alignment, and AI innovation. Find her on LinkedIn.
This article was co-created with Claude.
