In Fritz Lang's 1927 film Metropolis, the prophet Maria declares: "The mediator between the head and the hands must be the heart." A century later, we're facing a similar question: As AI becomes the hands (execution), and capital remains the head (strategy), how do we value the heart — human judgment, empathy, and wisdom?
This isn't about how I should charge for my work. It's about how we (as a society, as companies, as workers) figure out how to measure and compensate human contributions when AI handles execution. It's an ethics question as much as an economics one.
If productivity gains don't flow to workers, we face mass displacement. We're already seeing this with rounds of layoffs. But if we can figure out how to price the intangibles (judgment, serendipitous creativity, lived experience, contextual wisdom), we create a path where humans do more meaningful work and get fairly compensated for it.
The 5-Hour → 30-Minute Problem
Here's a reality many of us are facing:
What used to take 5 hours now takes 30 minutes with AI. But we still price work in hourly rates, whether in a 40-hour work week or part-time gig work.
So we're stuck:
Charge less (because it took less time) = lose income
Charge the same (but clients feel ripped off) = lose trust
Explain why 30 minutes is worth 5 hours = exhausting, and most don't believe it anyway
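The squeeze is easy to see with back-of-the-envelope numbers. The rate and hours below are hypothetical, purely for illustration:

```python
# Hypothetical numbers: a $100/hour rate, a task that once took
# 5 hours and now takes 30 minutes with AI assistance.
rate = 100          # dollars per hour (assumed)
hours_before = 5.0
hours_after = 0.5

fee_before = rate * hours_before   # what the work used to bill: $500
fee_after = rate * hours_after     # what hourly billing pays now: $50

income_drop = 1 - fee_after / fee_before
print(f"Before: ${fee_before:.0f}, after: ${fee_after:.0f}")
print(f"Income drop under hourly pricing: {income_drop:.0%}")

# The client receives the same outcome in both cases; only the hours
# changed. Under hourly pricing, the entire productivity gain is
# transferred away from the worker.
```

The point of the sketch: the value delivered is identical, yet the hourly model pays out 90% less, which is exactly the trap described above.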
This happens because we still price work in hours. But hours don't capture value anymore. They never did.
The Measurement Problem
For decades, we've tried to fit humans into the machine's way of measuring: definite, countable, verifiable hours. We adopted timesheets, hourly rates, and productivity metrics designed for industrial output, treating human work as if it were widgets on an assembly line.
But humans don't create value the way machines do.
Machines deliver definite outputs — a toaster gives you toast every time, same temperature, same crispness (hopefully). I'm not saying AI is a toaster, but at least for mainstream GenAI, we're still talking about predictive models.
Humans deliver intangible outcomes — a chef knows when to improvise, when the tomatoes are too acidic so you need more sugar, when to break the recipe because the dinner guest just mentioned they're vegetarian. You can measure toast by the slice. How do you measure "I got to make sure this guest doesn't get heartburn later"?
We've always priced human work on what's definite, measurable, and expiring: time, effort, and presence.
Time is easy to measure:
Finite and countable
Visible and verifiable
Comparable across people
Value is hard to measure:
Often invisible (the expensive mistake that didn't happen)
Contextual (same advice worth different amounts to different companies)
Fluid (judgment, serendipitous creativity, lived experience, contextual wisdom)
So we defaulted to time as a proxy for value. Hourly rates. Salaries. Project scopes.
But AI breaks this assumption.
AI compresses labor-intensive work into quick iterations. Human judgment, serendipitous creativity, and lived experience are what turn those iterations into better decisions, fewer mistakes, and unexpected breakthroughs — not to mention the water and energy saved by not rerunning AI endlessly.
We're trying to measure intangible value with a definite metric (hours). And it doesn't work.
What AI has revealed is this: the only way to differentiate and value human contributions is to embrace the intangible. Not try to force humans into machine metrics, but measure what makes us fundamentally different.
We need to get better at measuring intangibles. Not just for business reasons, but for ethical ones.
Because if we can't measure them, we can't compensate them fairly.
If I've learned anything from being in a design leadership position, it's that we need to have this conversation, speaking the language of business ROI. This is an attempt to frame the future of work options in ways that capital and people can both understand, so we can explore them together.
The Mental Healthcare Paradox
Consider this interesting use case from a mental health AI startup:
Patients open up to AI therapists faster (lower barrier, no judgment)
But won't take the AI's advice when it comes to life decisions (no lived experience = no credibility)
AI generates first drafts. Humans provide the judgment that matters.
In a world where AI handles execution, the real question isn't "how do I do this faster?"
It's "how do we get paid for seeing what needs to be done, by bringing what only a human can?"
Why This Matters Now
As companies add AI to every process and workflow, compressing full-time jobs into part-time hours, we face three ethical imperatives:
1. Compensate human contributions differently and fairly
If we're 10x more productive, we shouldn't earn 10% of what we used to. The value created didn't disappear; we need new ways to measure and capture it. AI productivity gains shouldn't only flow to capital.
And we're already seeing the market shift: what used to be $100 per hour is now $70 per hour. It's a hiring manager's market, but it also feels like companies are using this moment to reset rate expectations downward.
2. Create new work
If AI handles execution, what's the new human work? Not busywork, but meaningful contribution that uses judgment, serendipitous creativity, lived experience.
3. Build social safety nets
Not only to catch those whose work is genuinely displaced, but to redirect their talents toward public goods problems that benefit generations. Remember Roosevelt's New Deal programs in the 1930s? Modern equivalents might include UBI, retraining programs, and infrastructure investment.
If we solve this, we create new types of work that value what humans uniquely bring, addressing displacement through economic opportunity, not just safety nets.
The Intangibles Economy: What Makes Humans Valuable Now?
Here are the four core intangible human values that I can think of in an AI economy:
1. Impact of Ideas in Action. AI generates ideas at scale, but humans know which ideas matter. The value is the delta between "100 ideas" and "the 3 that work for us in this market" — taste, judgment, strategic prioritization.
2. Expensive Mistakes Avoided. Specialists optimize their function, but cross-functional thinkers see dependencies and catch what didn't happen because someone spotted it early. Example: "I spent 1 hour to save my team 20 building the wrong thing" — risk sensing, systems thinking, scar tissue.
3. Opportunities Opened (Ideation → Action). AI optimizes existing paths, but humans see new paths and the revenue or impact that didn't exist before. This is creative synthesis, lateral thinking, possibility sensing.
4. Pattern Recognition By Lived Experience. Like the therapist, credibility comes from "I've been through this" — not just data pattern recognition (AI does that), but contextual pattern recognition: knowing what works in this situation with these people. This is wisdom, contextual judgment, emotional intelligence.
The challenge: These intangibles are real and valuable. But how do we measure them? And how do we price them?
How Do We Price Intangibles?
Different pricing models are attempts to capture intangible value. None are perfect. Here's how they compare:
Ways of Work Menu
Key observations:
Time-based models (hourly, project-based) struggle to capture intangible value
Full-time salary works when company culture recognizes and rewards intangibles. It's worth noting that full-time salary is a bit like paying someone a monthly gym membership — you're betting that the compound value over time (institutional knowledge, deep relationships, knowing where all the bodies are buried) justifies the ongoing cost even when they're not actively "producing" every single hour. The problem is when companies treat it like a gym membership they forget they have.
Hybrid models score highest because they capture both access to judgment AND outcomes delivered, but they're complex to structure and require sophisticated buyers on both sides
We're still experimenting; even hybrid models don't perfectly capture all four intangible values in every context
What People Are Experimenting With
X-shaped people (those who work across Product, Design, Tech, and Go-to-Market) are navigating this shift in real-time. Their value is especially intangible: they prevent expensive mistakes and open opportunities across functions without fitting into a single OKR or function. Here's what I've seen people doing and experimenting with:
Fractional CPO (Retainer). $X/month for unlimited access. Pricing judgment, pattern recognition, risk mitigation — not hours but outcomes avoided and opened.
Speed to Validated Learning. Concept to tested prototype in 3 weeks for $X. Charging for learning velocity: you know what to build next, not just that you built something.
Equity + Project. MVP for $X + Y% equity or Z% in shared revenue. Shares risk: if it works, we both win. Values long-term pattern recognition.
Productized Knowledge. Monthly/annual subscriptions for frameworks, templates, communities. Continuous learning and pattern recognition at scale.
Full-Time at the Right Company. Choosing companies that value cross-functional thinking, offer ownership of problems, and recognize "expensive mistakes avoided."
The insight: X-shaped people face the pricing question acutely because their value is intangible. If they can figure out how to price this, it points the way for everyone else.
The Bigger Question
We're in the middle of a shift from measuring definite inputs (hours, tasks) to valuing intangible outputs (judgment, opportunities unlocked, mistakes avoided).
Different models will work for different contexts. Full-time salary works when the culture rewards intangibles. Retainers work when clients trust judgment over deliverables. Outcome-based works when success is definable. Hybrid works when both sides want shared risk.
But the real question isn't which model is "right."
The question is: "What new opportunities will I gain or expensive mistakes will I avoid by having you in the room?"
Figuring out how to price that is the conversation we all need to have.
Let's Figure This Out Together
How are you navigating this shift in your work? What models are you experimenting with?
This is the conversation we need to have. DM me, let's figure it out together.
This is Part 4 of an ongoing series on X-shaped people and the future of creative teams. Read Part 1: The X-Shaped Individual: Solving for Problems in 3D, Part 2: It's Only Through Doing That You Become: How X-shaped people are made — and how teams can grow them, and Part 3: Your Design System Isn't a Style Guide Anymore — It's AI Infrastructure.
Thu Do is a hands-on product owner with 10+ years bringing products from 0-to-1 across startups, Fortune 500 consultancies (BCG, PwC), and innovation studios. She helps early-stage to early-growth companies ($1-10M ARR) and innovation teams turn big visions into competitive market-ready products and services through human-centered design, product alignment, and AI innovation. This article originally appeared on Thu's Tech Dialect. Find her on LinkedIn.
Co-created with Claude
