What I Learned Mapping My AI Workflow: We Need AI Experience Architects

I spent tonight doing something that felt both exhilarating and humbling: mapping out all the AI agents and tools working alongside me in my consultancy. What started as a simple organizational exercise ended up revealing something fundamental about how humans and AI will work together—and what our role actually is.

The Exercise That Changed My Thinking

I created a visual framework showing how different AI agents handle different parts of my work: a Chief of Staff agent managing daily priorities, a Content Agent generating posts, a Writing Coach refining my voice, a Knowledge Hub organizing everything we learn. Seeing it all laid out made the possibilities feel real in a way abstract AI conversations never do.

But more importantly, it revealed three critical insights.

Insight 1: Knowledge Transfer Works Both Ways

From a human perspective, knowledge sharing is about passing on what we've learned to fill gaps for others. From a machine perspective, it's literally what makes multi-agent systems function.

Each agent needs to learn throughout the day, capture insights, and share them. The Writing Coach learns my style and passes samples to the Content Agent. The Note-Taking Agent feeds the Knowledge Hub. But who prioritizes all this information? Who decides what matters?

This reminded me of work I did years ago redesigning the knowledge hub for BCG—surfacing the right information at the right time could save thousands of consulting hours. The same principle applies here, but now it's machines that need access to centralized, living knowledge to work effectively together.

Insight 2: The Orchestrator Isn't Who I Thought

Everyone talks about "human as orchestrator" in AI systems. But mapping my workflow revealed another layer: I need a Chief of Staff AI to actually orchestrate the agents—collecting their outputs, managing traffic, prioritizing actions, synthesizing information.
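That "collect, prioritize, synthesize" loop can be pictured as a small priority-queue pattern. This is a minimal Python sketch of the idea, not any real product's API; the class and method names (`ChiefOfStaff`, `collect`, `daily_briefing`) are my own illustrative choices.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class AgentOutput:
    priority: int                      # lower number = more urgent
    agent: str = field(compare=False)  # which agent produced this
    content: str = field(compare=False)

class ChiefOfStaff:
    """Sketch of an orchestrator: collects outputs from other agents
    and surfaces the most urgent ones as a short briefing."""

    def __init__(self) -> None:
        self._queue: list[AgentOutput] = []

    def collect(self, agent: str, content: str, priority: int) -> None:
        # Each agent pushes its output with a priority; the heap keeps order.
        heapq.heappush(self._queue, AgentOutput(priority, agent, content))

    def daily_briefing(self, top_n: int = 3) -> list[str]:
        # Synthesize: pull the top-priority items across all agents.
        top = heapq.nsmallest(top_n, self._queue)
        return [f"[{item.agent}] {item.content}" for item in top]
```

Used like this, the human never triages raw agent traffic, only the synthesized briefing:

```python
cos = ChiefOfStaff()
cos.collect("Content Agent", "Draft post ready for review", 2)
cos.collect("Chief of Staff", "Client call at 10am", 1)
print(cos.daily_briefing())
```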

If the Chief of Staff AI is the orchestrator, what am I?

I realized: I'm the architect.

Not in the technical sense of system architecture, but as an AI Experience Architect—someone who designs how humans and AI collaborate, who ensures human experience isn't forgotten in the rush to automate.

Insight 3: The Provocation We're Missing

Here's what worries me: What happens if we only build the technology without consideration for humanity and human experience?

We get powerful systems with no one thinking about:

  • How knowledge flows between agents and humans

  • Who's responsible for the quality of AI-human collaboration

  • Whether the experience makes us more human or less

  • What critical thinking, ethics, and taste look like in an AI-native world

We don't just need more AI architects building systems. We need AI Experience Architects who bring critical thinking, strategy, ethics, taste, and responsibility to how these systems work with people.

The Knowledge Hub as Living Platform

The most important piece in my framework isn't any single agent—it's the Knowledge Hub. Not just documentation of what happened, but a living platform for what's yet to come. It's where machine learning meets human wisdom. Where distributed agents (and distributed humans) share what they've learned to make better decisions tomorrow.
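The "living platform" idea boils down to a shared store that any agent (or human) can publish learnings into and any other can query later. Here is a minimal Python sketch of that pattern, purely illustrative: the names (`KnowledgeHub`, `publish`, `query`) and the topic strings are assumptions of mine, not a real system.

```python
from collections import defaultdict
from itertools import count

class KnowledgeHub:
    """Sketch of a shared, living knowledge store: agents publish
    insights as they learn; anyone can query a topic later."""

    def __init__(self) -> None:
        self._seq = count()                  # monotonic order of arrival
        self._entries = defaultdict(list)    # topic -> [(seq, source, insight)]

    def publish(self, topic: str, source: str, insight: str) -> None:
        # Any agent or human can contribute what it learned today.
        self._entries[topic].append((next(self._seq), source, insight))

    def query(self, topic: str) -> list[str]:
        # Newest first, so tomorrow's decisions see today's learning.
        return [f"{source}: {insight}"
                for _, source, insight in sorted(self._entries[topic], reverse=True)]
```

The point of the sketch is the direction of flow: knowledge accumulates from every agent into one place, instead of living and dying inside a single conversation.

```python
hub = KnowledgeHub()
hub.publish("client-voice", "Writing Coach", "Client prefers short paragraphs")
hub.publish("client-voice", "Content Agent", "Posts with questions get more replies")
print(hub.query("client-voice"))
```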

This is what makes the compound effect of AI possible—not just agents getting smarter, but the entire system learning together.

What This Means for You

If you're building with AI, I'd encourage you to map your own workflow. Not to optimize it—but to understand it. You might discover, like I did, that your role isn't what you thought.

You're not just using AI tools. You're architecting an experience. And that requires something only humans can bring: the critical thinking to question, the ethics to guide, the taste to judge, and the responsibility to own what we create together.

That's the work of an AI Experience Architect. And I think it's the most important role we're not talking about yet.