Humaie
Thought Leadership Whitepaper  ·  2026

Designing the Intelligent Enterprise

From hierarchy to capability — how agentic AI is making the org chart obsolete, what the new paradigm of human–agent collaboration looks like, and where it leads.

Cameron Smith
COO & Co-Founder, Humaie
AI was used in the creation of this white paper as a research tool, critical thinking partner, and for spelling and grammar assistance.
01 — The Problem

The Org Chart Is a 2,000-Year-Old Information Technology

In 52 BC, the Roman Army solved a problem that every organisation still faces today: how do you coordinate thousands of people when no single leader can oversee them all? Their answer was elegant and brutal — a nested hierarchy with a consistent span of control at every level. Eight soldiers, one decanus. Ten of those, a centurion. Six centuries, a cohort. Ten cohorts, a legion of five thousand. The structure — 8, 80, 480, 5,000 — was not a management philosophy. It was an information routing protocol, built around one fixed human constraint: a leader can effectively oversee somewhere between three and eight people.

More precisely: it was a communication architecture. A structured channel through which information flows up, decisions flow down, and coordination happens across an organisation too large for any one person to see whole. Every management layer is a relay station. Every span-of-control limit is a bandwidth constraint. What we call "management" is, at its core, the organised movement of information between people who cannot all talk to each other directly.

Two millennia later, that same constraint still governs your organisation. The Prussian military formalised it as the General Staff — professionals whose entire purpose was to pre-compute decisions and route information up and down command layers. Daniel McCallum drew the world's first org chart in 1855, for the New York and Erie Railroad, after train collisions made clear that informal management at scale cost lives. Frederick Taylor optimised the work happening within that structure. McKinsey helped Shell and GE globalise it. Every experiment to escape it — from Spotify's squad model to Zappos' Holacracy — eventually collapsed back into hierarchy at scale, not because these were bad ideas, but because no alternative coordination technology was powerful enough to replace what hierarchy actually does.

What hierarchy actually does is carry information. It routes context from the front line to the decision-maker and decisions back again. Add more layers to coordinate more people, and information slows. Remove layers, and coordination breaks. For two thousand years, this has been the fundamental tradeoff at the heart of organisational design — and nobody has broken it.

The Roman Span-of-Control Model — Origin of the Corporate Hierarchy
Legate: a legion of 5,000 · ten cohorts of 480 · six centuries of 80 per cohort · squads of 8 soldiers. Span of control: 3–8 people per layer. The same logic governs every org chart on earth today.

Until now.

"Most companies using AI today are giving everyone a copilot, which makes the existing structure work slightly better without changing it. We're after something different — a company built as an intelligence."

Jack Dorsey & Roelof Botha, Block / Sequoia Capital, March 2026

The shift that is underway is not incremental. AI is not a better relay station — it is a different answer to the same question the Romans were asking. For the first time in history, a system can maintain a continuously updated model of an entire business and use it to coordinate work in ways that previously required humans relaying information through layers of management. The hierarchy was not a design choice. It was the best available technology. That technology has now been superseded.

80%
of companies using gen AI report no material contribution to earnings (McKinsey, 2025)
86%
of senior leaders say their organisation is not prepared to adopt AI into day-to-day operations (McKinsey State of Organisations 2026, survey of 10,000+ executives across 15 countries)
6%
of companies are true "high performers" achieving meaningful earnings impact from AI (McKinsey, 2025)
The Gen AI Paradox — Adoption vs. Impact
Using gen AI: 88% · No earnings impact: 80% · High performers: 6% · Not AI-ready: 86%. Source: McKinsey State of Organisations 2026; McKinsey, Seizing the Agentic AI Advantage, 2025

The reason most organisations are failing to capture value from AI is not that they are choosing the wrong tools. It is that they are bolting a new technology onto an old structure. Adding AI to a hierarchy does not change what the hierarchy is — it just makes the relay stations slightly faster. The organisations that will compound advantage over the next decade are those willing to ask a different question: not "how do we use AI?" but "what should we be organised around in a world where AI handles coordination?"


02 — The Foundation

Composable Business Architecture: Organise Around Capabilities, Not Functions

If AI can handle the coordination that justified hierarchy, then organisations need a new principle of design. That principle is composable business architecture — the idea that an organisation can and should be decomposed into discrete, modular capabilities: the ability to acquire customers, manage risk, develop talent, deliver products, generate market intelligence. Not departments. Not functions. Capabilities — and the distinction matters enormously.

To be precise about what is being criticised here: departments were not a flawed concept. They were a structurally inevitable consequence of operating under information scarcity. Grouping similar skills under a single leader was a rational solution to the coordination problem of the era — it minimised the cost of routing information through a hierarchy that could not move fast enough to serve cross-functional work. The failure was not in the intention. It was in the physics. Under information scarcity, functional structures cannot organise around outcomes — because delivering any meaningful outcome requires coordinating across multiple functions simultaneously, and that coordination was precisely what the hierarchy was too slow to do cheaply. Departments optimised inward not because leaders chose badly, but because the structure gave them no other option. AI removes that constraint. The coordination cost that made cross-functional working prohibitively expensive collapses. And with it, the structural logic for organising by function collapses too.

Composable business architecture treats the capability, not the function, as the stable unit of organisational design. People and agents flow to capabilities based on need. The capability itself persists. A traditionally structured organisation asks: "who does finance?" Composable business architecture asks: "what does our financial management capability need to produce, and who or what is best placed to operate it?" The answer might be a human team, an AI-augmented team, or — in well-bounded, well-validated areas — agents operating largely autonomously under human governance. The question is always about the output of the capability, never about the structure built around it.

Gartner, McKinsey, BCG, and Forrester have all arrived at different versions of this insight from different starting points. McKinsey's agile transformation research shows that organisations redesigned around capabilities achieve a five- to tenfold increase in speed of change and decision-making, alongside a ten to thirty percent improvement in both customer satisfaction and employee engagement (McKinsey, "A New Operating Model for a New World," 2025). BCG data shows a two- to fourfold reduction in time to market from platform capability models (BCG, "Platform Operating Models," 2023). The convergence across strategy firms is striking — and it reflects a genuine structural shift in how leading organisations are choosing to think about themselves.

Function-Centric vs. Capability-Centric Design
Function-centric: CEO over Finance, Marketing, and Operations, each optimising within its own silo — and no clear owner of cross-functional outcomes. Composable: Customer Acquisition and Risk Management (differentiating: own and deepen); Financial Operations and Talent Development (enabling: augment with AI); Payroll Processing and IT Infrastructure (commodity: source externally); an intelligence layer composes capabilities into outcomes.

What composable business architecture means in practice

The concept is deceptively simple, and easily confused with adjacent ideas. In composable business architecture, a capability is not a process — it does not describe how work gets done. It is not a function — it does not describe who does the work. It is not a project — it does not have a start and end date. A capability is an organisation's ability to produce a specific outcome, regardless of how that outcome is achieved or by whom.

Three characteristics define a genuine capability. First, it is outcome-oriented: "customer relationship management" is a capability; "the CRM system" and "the sales team" are means of delivering it. Second, it is relatively stable over time: while the processes, technologies, and people involved in delivering a capability may change dramatically, the capability itself — the need to acquire customers, manage risk, develop talent — persists across decades. Third, it is cross-functional by nature: a single capability typically draws on multiple departments, systems, and skills, which is precisely why functional structures are so poorly suited to managing it clearly.

Capability vs. Function: The Critical Distinction

A pharmaceutical company has a "regulatory affairs" function and a "market access" function. But what it actually needs is a "product approval" capability — the ability to move a drug from clinical trial to commercial availability in a given market. That capability draws on regulatory, legal, medical affairs, commercial, and market access simultaneously. Organising around the function means those teams optimise within their silo. Organising around the capability means asking: what does "product approval" actually need to produce, how well are we producing it, and what combination of human expertise and AI-assisted process management would produce it best?

Three tiers: not all capabilities are equal

The most important insight in capability-based design is that capabilities are not equal in strategic value — and conflating them is one of the most expensive mistakes large organisations make. The evidence consistently points to three distinct tiers, each requiring a fundamentally different investment and management approach.

01

Differentiating Capabilities

The capabilities that generate competitive advantage and that are genuinely hard to replicate. These are your moats. They deepen every day your organisation operates them — through proprietary data, accumulated expertise, network effects, or deep customer relationships. They should be owned, invested in, and never outsourced. In an agentic enterprise, these are the capabilities where human judgment is most irreplaceable and where AI's role is to augment and accelerate, not replace.

02

Enabling Capabilities

Capabilities that are vital to operating effectively but where being "on par with the market" is the right aspiration. You need them to be good, but excelling at them does not generate sustainable competitive advantage. Think: the ability to attract and retain the right people, to keep financial operations accurate and compliant, to manage legal and regulatory exposure, to procure at competitive cost. These are strong candidates for AI augmentation — where agents handle high-volume, well-defined work and humans focus on judgment and exceptions. A useful test: if a well-run competitor could deliver this capability at roughly the same standard, it belongs here, not in Tier One.

03

Commodity Capabilities

Capabilities so standardised that any organisation can deliver them interchangeably. The ability to process payroll. To file standard compliance reports. To provision and maintain basic technology infrastructure. To handle routine administrative transactions. These are strong candidates for full automation, outsourcing, or agent-led operations. Investing in excellence here delivers no strategic return — the question is only: how do we operate this at minimum viable cost and maximum reliability?

One important nuance: these tiers are a diagnostic tool, not a permanent taxonomy. A capability that sits in Tier Two today can migrate. Data infrastructure was a Tier Two enabling capability for most organisations a decade ago. For organisations that have built proprietary data compounds — Block's transaction intelligence, Amazon's logistics network, JPMorgan's fraud detection — it has become Tier One. AI accelerates this migration in both directions: making some capabilities ubiquitous and cheap (pushing them toward Tier Three), while making others increasingly rare and valuable (pulling them toward Tier One). The strategic question is not only where each capability sits today, but which direction it is moving — and whether you are investing accordingly.

The practical implication is direct: your investment, your talent, and your senior leadership attention should be disproportionately concentrated on Tier One. Most organisations distribute investment roughly proportionally across all capabilities — which means they are materially under-investing in the capabilities that actually make them distinctive, and over-investing in infrastructure that could be run by agents or outsourced partners. McKinsey's capability research shows that savings from rationalising commodity and enabling capabilities typically range from ten to twenty percent of operational costs — resources that can be redirected to building genuine strategic depth.

The emerging capability marketplace: should you build, augment, or source?

The three-tier classification raises a more disruptive question than it first appears. For decades, the make-vs-buy decision has governed outsourcing strategy: you could keep a capability in-house or contract it to a human workforce elsewhere. What is now changing is the nature of what can be sourced externally. We are moving from outsourcing labour and process to the ability to source capability itself — the intelligence, judgment, and execution within a defined domain — delivered by external AI agents, consumed on demand, and priced per outcome.

This is not simply a faster form of outsourcing. It is categorically different. When you outsource payroll to ADP today, you are buying a managed service that still relies on humans, systems, and relationship overhead. When you source a capability from an external agent network, you are deploying machine intelligence that operates within your architecture, conforms to your governance parameters, and can be swapped, upgraded, or replaced without the friction of employment, contract renegotiation, or knowledge transfer. The capability slot in your architecture remains yours. What fills it can change.

Three models are emerging that leaders should be watching closely:

Capability-as-a-Service. Standardised capability domains — financial reconciliation, compliance monitoring, customer support, contract review, market intelligence — delivered by specialist AI providers and consumed as a subscription. Primitive versions already exist in SaaS: Rippling for HR, Tipalti for accounts payable, Intercom for customer interaction. The trajectory is toward entire capability layers being externally delivered by agent-native providers, with far greater scope, intelligence, and adaptability than today's workflow software.

Agent marketplaces. Open markets in which organisations procure and deploy specialist agents for specific capability needs — analogous to hiring a contractor, but for a machine. Salesforce's AgentForce, ServiceNow's AI agents, and emerging platforms from major cloud providers are early expressions of this model. More significantly, the Model Context Protocol (MCP) and agent-to-agent communication standards are creating the infrastructure for agents to themselves procure sub-agent services — your organisation's orchestration layer selecting the best available external agent for a capability, monitoring its performance, and replacing it when something better emerges.

Industry-level shared capability pools. The most forward-looking model: multiple organisations co-accessing commodity capabilities through shared intelligence infrastructure. No individual bank builds its own fraud detection from scratch; increasingly, they consume it from shared networks. SWIFT is a primitive version for financial messaging. What's coming is shared regulatory compliance agents, shared market intelligence layers, shared supply chain visibility capabilities — functions that are genuinely commodity across an industry and where no single player gains competitive advantage from building them alone.
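The swap-and-replace mechanics common to all three models can be sketched in a few lines. This is an illustrative Python sketch, not any vendor's actual API: the class and field names (`ExternalAgent`, `CapabilitySlot`, `cost_per_outcome`, `success_rate`) are assumptions chosen to mirror the prose, and the swap rule (meet a quality bar, then compete on cost per outcome) is one plausible policy among many.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ExternalAgent:
    """Hypothetical record for an externally sourced capability provider."""
    provider: str
    cost_per_outcome: float   # priced per outcome, not per seat or per contract
    success_rate: float       # observed quality over a trial period, 0.0-1.0

class CapabilitySlot:
    """The slot in your architecture stays yours; what fills it can change."""
    def __init__(self, name: str, min_success_rate: float = 0.95):
        self.name = name
        self.min_success_rate = min_success_rate
        self.current: Optional[ExternalAgent] = None

    def consider(self, candidate: ExternalAgent) -> bool:
        """Swap in a candidate that meets the quality bar and beats current cost."""
        if candidate.success_rate < self.min_success_rate:
            return False
        if self.current is None or candidate.cost_per_outcome < self.current.cost_per_outcome:
            self.current = candidate
            return True
        return False
```

The design choice worth noting is that the quality bar is a governance parameter owned by the organisation, while the provider is a replaceable detail — which is exactly the asymmetry the capability-marketplace argument depends on.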

This reframes the strategic question for every capability tier. It is no longer simply "how efficiently do we operate this?" For Commodity and many Enabling capabilities, the question is more fundamental: should we operate this at all?

Most human control · Human-in-the-Loop: the agent pauses and a human must approve. For high-stakes, irreversible, or ethically sensitive decisions.
Balanced · Human-on-the-Loop: the agent acts while a human monitors and can intervene. For medium-stakes, reversible, high-volume work.
Most agent autonomy · Autonomous Within Bounds: the agent acts fully within governed parameters, with audit logs. For validated, low-risk, high-frequency processes.
Capability tier · Ownership approach · Agent strategy · Sourcing direction
Differentiating · Own and deepen; never outsource · Augment human judgment: AI accelerates but does not replace · Build exclusively; proprietary advantage compounds here
Enabling · Operate well; on-par is sufficient · Automate high-volume work; humans own exceptions and judgment · Hybrid: internal for sensitive functions, external agents for standard work
Commodity · Minimise ownership cost · Agent-led or fully automated within governed parameters · Source externally via Capability-as-a-Service, agent marketplace, or shared industry pool
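The oversight spectrum maps cleanly onto a decision rule. Below is a minimal Python sketch, under stated assumptions: the task attributes (`high_stakes`, `reversible`, `validated`) are illustrative labels rather than a standard taxonomy, and the rule simply encodes the three modes described above.

```python
from dataclasses import dataclass
from enum import Enum

class Oversight(Enum):
    HUMAN_IN_THE_LOOP = "agent pauses; human must approve before it acts"
    HUMAN_ON_THE_LOOP = "agent acts; human monitors and can intervene"
    AUTONOMOUS_IN_BOUNDS = "agent acts within governed parameters, with audit logs"

@dataclass
class Task:
    high_stakes: bool   # irreversible harm, ethical sensitivity, or existential cost of error
    reversible: bool    # can the decision be cheaply undone?
    validated: bool     # has the process been validated as low-risk and well-bounded?

def oversight_mode(task: Task) -> Oversight:
    """Map a task's risk profile to an oversight mode, per the spectrum above."""
    if task.high_stakes or not task.reversible:
        return Oversight.HUMAN_IN_THE_LOOP
    if task.validated:
        return Oversight.AUTONOMOUS_IN_BOUNDS
    return Oversight.HUMAN_ON_THE_LOOP
```

The point of writing the rule down is not automation for its own sake: making the escalation logic explicit is what turns "human oversight" from a slogan into an auditable governance parameter.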

There is also a strategic argument for composable business architecture that goes beyond internal efficiency. An organisation structured around function-centric hierarchies will find it structurally difficult to plug in external capability providers even when it wants to — because the capability is embedded in a department, a team, a set of management relationships and institutional knowledge that cannot easily be lifted out. The composable organisation, by contrast, has defined its capabilities as discrete, bounded, interoperable units. It can swap in an external agent capability the same way it would swap in a software component. The composable design is not just internally beneficial — it is what makes you interoperable with the emerging market for external capabilities.

The Governance Implication

If you can source capabilities externally from AI agent providers, your governance framework must extend beyond your own agents. "Big G" standards — ethical principles, data handling, audit access, decision transparency — cannot stop at your organisational boundary. Future capability contracts will need to govern not just software uptime and cost, but agent judgment within defined parameters: how decisions are made, what data is used, how errors are escalated, and what human override rights are preserved. This is genuinely new commercial and legal territory. The organisations designing these frameworks now will have a significant advantage as the capability marketplace matures.

How to apply composable business architecture: mapping your capabilities

The Five-Step Capability Mapping Process
01 Name what you do · 02 Decompose into sub-capabilities · 03 Classify by strategic tier · 04 Rate performance & AI readiness · 05 Decide sourcing model

Capability mapping is not an IT exercise — though it is often wrongly delegated to IT. It is a leadership conversation, and it produces the most valuable output when it involves senior business leaders who understand what the organisation actually does, not just how it is organised on paper. The process has five steps.

Step one: Name what you do, not how you are structured. Gather your senior team and ask: "What are all the distinct things this organisation needs to be able to do to deliver on its strategy?" List them as nouns — Customer Acquisition, Risk Assessment, Product Development, Supplier Management, Talent Development. Avoid verbs and processes. At this stage, aim for twelve to twenty top-level capabilities. Do not try to make this list match your org chart — the whole point is that it will not.

Step two: Decompose into sub-capabilities. Each top-level capability can typically be broken into three to five sub-capabilities that together constitute it. "Customer Acquisition" might decompose into: Market Intelligence, Brand & Positioning, Demand Generation, Sales Conversion, and Partner Channel Management. Each sub-capability should be mutually exclusive (no overlap) and collectively exhaustive (together they constitute the parent capability). This creates a two- or three-level hierarchy that gives genuine analytical granularity.

Step three: Classify each capability by strategic tier. For each capability — differentiating, enabling, or commodity — ask three questions: Does excelling at this create competitive advantage? Could a competitor easily replicate this if they invested? Does our ability to deliver this improve over time through accumulated experience and data? Differentiating capabilities answer yes, no, and yes respectively. Commodity capabilities answer no, yes, and no: no advantage, easily replicated, no compounding. This classification will generate productive disagreement — which is the point. It surfaces implicit assumptions about where value actually comes from in your business.

Step four: Assess current performance and AI readiness. For each capability, honestly rate your current performance (leading, on par, lagging) and your AI readiness — the degree to which the work within this capability is well-defined, data-rich, and bounded enough to be partially or substantially operated by agents. This produces the human-agent allocation surface: a specific, actionable map of where agents operate and where humans remain essential.

Step five: Decide the sourcing model. For each Commodity and Enabling capability, ask explicitly: should we build, augment, or source this externally? The capability marketplace is not fully mature — but it is developing fast enough that the sourcing question should be part of every capability planning cycle today. The capabilities you decide to source externally should be designed with clean interfaces and clear governance parameters so they can be plugged in when the right provider exists.
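The five steps above can be sketched as a simple data model. This is an illustrative Python sketch, not a prescribed tool: the field names are assumptions, and the tier logic encodes the three diagnostic questions from step three (treating a capability as commodity when it creates no advantage, could be readily replicated, and does not compound over time).

```python
from dataclasses import dataclass, field

@dataclass
class Capability:
    """One row in a capability map (steps 1-2); names are illustrative."""
    name: str
    creates_advantage: bool    # does excelling create competitive advantage?
    easily_replicated: bool    # could a competitor replicate it with investment?
    compounds_over_time: bool  # does delivery improve with accumulated data/experience?
    performance: str = "on par"     # step 4: leading / on par / lagging
    ai_readiness: str = "medium"    # step 4: low / medium / high
    sub_capabilities: list = field(default_factory=list)  # step 2: MECE decomposition

def classify(c: Capability) -> str:
    """Step 3: tier classification from the three diagnostic questions."""
    if c.creates_advantage and not c.easily_replicated and c.compounds_over_time:
        return "differentiating"
    if not c.creates_advantage and c.easily_replicated and not c.compounds_over_time:
        return "commodity"
    return "enabling"

def sourcing(c: Capability) -> str:
    """Step 5: default sourcing direction by tier, per the three-tier model."""
    tier = classify(c)
    if tier == "differentiating":
        return "build internally; own and deepen"
    if tier == "commodity":
        return "source externally (CaaS, agent marketplace, or shared pool)"
    return "hybrid: augment with agents; humans own exceptions"
```

Even this toy model makes the workshop mechanics concrete: disagreements show up as arguments over the three booleans, which is exactly where the productive conversation belongs.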

The Block Model — One Architecture, One Illustration

Block offers one of the most publicly documented examples of composable business architecture in practice. It has decomposed its business into "atomic financial primitives" — payments, lending, card issuance, banking, payroll — which are not products but capabilities, each without a user interface of its own. An intelligence layer then composes these capabilities into solutions for specific customers at specific moments. When the intelligence layer cannot compose a solution because a capability does not exist, that failure signal becomes the roadmap. Critically, Block's model also illustrates the sourcing question: its "atomic primitives" are its differentiating tier, owned and deepened internally. But the interfaces — Square, Cash App, Afterpay — sit above them as delivery surfaces that could in principle be reconfigured as the market evolves. Block's architecture is instructive as one implementation of this logic; the principles apply across professional services, manufacturing, and healthcare through different paths.
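The compositional logic described above can be sketched abstractly: a registry of capability primitives, none with a user interface of its own, and an orchestrator that either composes them into an outcome or records the gap as a roadmap signal. This is a conceptual illustration, not Block's actual architecture; every name in it is hypothetical.

```python
class CapabilityRegistry:
    """Registry of capability primitives; an intelligence layer composes them."""

    def __init__(self):
        self._capabilities = {}
        self.roadmap = []  # gaps discovered when a composition fails

    def register(self, name, fn):
        """Add a primitive: a callable capability with no UI of its own."""
        self._capabilities[name] = fn

    def compose(self, needed, context):
        """Compose the requested capabilities into one outcome for one moment.

        A missing capability is not just an error: it is logged as a roadmap signal.
        """
        missing = [n for n in needed if n not in self._capabilities]
        if missing:
            self.roadmap.extend(missing)
            raise LookupError(f"missing capabilities: {missing}")
        return {n: self._capabilities[n](context) for n in needed}

# Usage: register primitives, then compose; a failed composition feeds the roadmap.
registry = CapabilityRegistry()
registry.register("payments", lambda ctx: f"charge {ctx['amount']}")
try:
    registry.compose(["payments", "lending"], {"amount": 100})
except LookupError:
    pass  # "lending" is now on registry.roadmap: the gap becomes the roadmap
```

The instructive detail is the last one: in a composable architecture, product strategy can be read directly off the failure log of the composition layer.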

What the map reveals that the org chart never could

Done well, a capability map exposes three things that a functional org chart systematically conceals. First, it reveals duplication: the same capability being operated independently by multiple business units, with no shared learning and duplicated cost. Large organisations routinely discover five or six separate "customer insight" capabilities operating in silos when they map honestly for the first time. McKinsey research shows that rationalising this duplication typically saves fifteen to twenty percent of operational costs — cost that can be redirected into the differentiating tier.

Second, it reveals strategic misalignment: investment and headcount concentrated in enabling and commodity capabilities while differentiating capabilities are chronically under-resourced. The org chart shows you who reports to whom. The capability map shows you whether you are investing in the right things.

Third, and most directly relevant for the agentic era, it reveals the human-agent allocation surface: the specific capabilities where the work is sufficiently well-defined, data-rich, and bounded to be partially operated by agents — and the specific capabilities where human judgment remains irreplaceable. This is the bridge between strategy and deployment. Not "where can AI help?" — a question that produces pilots scattered across the organisation — but: "within our Customer Acquisition capability, which sub-capabilities have sufficient data quality and process clarity for agent-led operation, and which require the connective labor, contextual judgment, and relational trust that only humans can provide?" Answered rigorously at the capability level, composable business architecture produces a genuinely actionable AI roadmap, with clear human-agent boundaries built in from the start.

Composable business architecture is, in this sense, the structural foundation of the intelligent enterprise — and the capability marketplace makes it strategically urgent. Without it, AI deployment is an experiment and external capability sourcing is structurally impossible. With it, every capability is a deliberate design decision: owned and deepened where it creates advantage, augmented where AI adds leverage, and sourced externally where the market can deliver it better and cheaper than you can build it yourself. The transition from Horizon One to Horizon Three (explored in Section 06) is precisely this journey — executed one capability at a time.


03 — The New Workforce

Agentic AI: A New Class of Worker in Every Capability

The term "AI agent" covers a lot of territory, and most of it has been overstated. But the underlying shift is real and structural. There is a categorical difference between AI that assists a human worker — a copilot, a search tool, a drafting aid — and AI that can plan, act, learn, and self-correct across a multi-step workflow with minimal human intervention. The latter is what "agentic AI" means. And it is arriving faster than most senior leaders have accounted for.

Gartner's research shows that task-specific AI agents were embedded in fewer than five percent of enterprise applications at the start of 2025. By the end of 2026, that figure is projected to reach forty percent. By 2028, fifteen percent of routine work decisions are expected to be made autonomously by AI agents. McKinsey identifies three levels of AI deployment — bolt-on tools that improve individual productivity by five to ten percent; integrated agents that save twenty to forty percent of team time; and reimagined processes where agents handle sixty to eighty percent of common operations autonomously. The value gap between level one and level three is not linear — it is an order of magnitude.

Gartner's Agentic AI Adoption Trajectory — Enterprise App Integration
2025: <5% · 2026: 40% · 2027: 50%+ · 2028: 68%. Source: Gartner 2025; Cisco 2025 — % of enterprise apps with task-specific AI agents

The early evidence at scale is striking. JPMorgan Chase now has over two hundred thousand employees using its proprietary AI suite daily, with four hundred and fifty AI use cases in production. Their AI investment is projected to deliver between one and two billion dollars in measurable business value. Block's internal AI agent generates an estimated ninety percent of code output per engineer; production output per engineer has risen over forty percent. Amazon has deployed millions of robots with agentic layers and has built thousands of agents across its organisation in the past year alone.

The Medvi Case — April 2026

Matthew Gallagher launched Medvi, a GLP-1 telehealth business, from his Los Angeles home in September 2024 with $20,000. In its first full year, Medvi reached $401 million in self-reported revenue — with a net profit margin of 16.2%, nearly three times that of its publicly traded competitor Hims & Hers, which employs over 2,400 people. Gallagher and his brother are the only full-time employees. But the lesson is more precise than the headline: Gallagher outsourced the entire regulated medical stack — physicians, prescriptions, pharmacy fulfilment, compliance — to specialist partners with their own workforces. He retained ownership of one thing: the customer-facing intelligence layer. This is agentic leverage applied to a single differentiating capability. It is not a blueprint for every industry — but as a proof of the economics possible, it is extraordinary. (Note: In March 2026, the FDA issued warning letters to Medvi and similar telehealth companies regarding compounded GLP-1 drug safety. The revenue figure is unaudited.)

But the honest picture is also cautionary. Gartner predicts that more than forty percent of agentic AI projects will be cancelled by the end of 2027 — not because the technology failed, but because organisations deployed it without redesigning for it. Block employees themselves report that roughly ninety-five percent of AI-generated code still requires human modification. Klarna famously declared that AI had replaced seven hundred employees — then watched customer satisfaction fall and quietly began rehiring. The technology is real; the implementation challenge is just as real.

The organisations that will extract durable value are not those that deploy the most agents. They are those that deploy agents against the right capabilities — the ones with sufficient data, clear success criteria, and well-defined boundaries between what agents handle and what humans own.


04 — The New Paradigm

Managing Humans and Agents Together: The New Operating Model

The question most senior leaders are now sitting with is not whether to deploy AI agents. It is how to manage a team that includes them. This is genuinely new territory — and the management instincts that served leaders well in traditional functional hierarchies will actively mislead them here.

BCG puts the central tension plainly: agentic AI is neither a tool nor a colleague — it is both simultaneously. Managing it purely as a tool produces under-investment in governance and oversight. Managing it purely as a worker produces category errors in accountability and trust. What is required is a third frame: the agent as a new kind of organisational actor, with defined roles, measurable outcomes, clear authority, and structured human oversight.

The emerging operating model for human-agent teams has three defining features.

1. Role Architecture: Clarity About Who Does What

The most important management decision in a human-agent team is not which agents to deploy — it is the architecture of roles that defines what humans own, what agents own, and where the handoffs occur. This requires being explicit about four distinct types of work:

Execution Work

Agents own this

Well-defined, high-volume, pattern-based tasks with clear success criteria. Research synthesis, data analysis, first-draft generation, scheduling, routing, monitoring, reporting. Human role: set parameters, review outputs, escalate exceptions.

Judgment Work

Humans own this

Novel situations, ethical decisions, high-stakes calls where the cost of being wrong is existential. Relationship management, strategic direction, value trade-offs, accountability to stakeholders. This is not residual — it is the core of what humans are for.

Connective Work

Humans own this — and must protect it

What sociologist Allison Pugh calls "connective labor" — the work of empathy, of truly seeing another person, of mentoring, motivating, and building trust. No agent can do this. No efficiency metric will show you when it is being eroded — but its absence will hollow out a team from the inside.

Orchestration Work

The new human meta-skill

Defining objectives for agents, structuring workflows, reviewing and challenging outputs, synthesising divergent agent-generated ideas into coherent direction. This is what management becomes when coordination is handled by the system, not the org chart.
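The four work types above amount to an allocation rule: route each piece of work by its characteristics, not by who historically performed it. As a purely illustrative sketch in Python — the field names, thresholds, and routing logic are this paper's own simplification, not a published framework:

```python
from enum import Enum
from dataclasses import dataclass

class Owner(Enum):
    AGENT = "agent"                       # execution work
    HUMAN = "human"                       # judgment and connective work
    HUMAN_ORCHESTRATOR = "human+agents"   # orchestration work

@dataclass
class WorkItem:
    pattern_based: bool       # well-defined, high-volume, repeatable?
    high_stakes: bool         # existential cost of being wrong?
    relational: bool          # requires empathy, trust, mentoring?
    coordinates_agents: bool  # defines objectives / reviews agent outputs?

def allocate(item: WorkItem) -> Owner:
    # Connective and judgment work stay with humans regardless of volume.
    if item.relational or item.high_stakes:
        return Owner.HUMAN
    # Orchestration: the human meta-skill sitting above agent workflows.
    if item.coordinates_agents:
        return Owner.HUMAN_ORCHESTRATOR
    # Execution: pattern-based work with clear success criteria goes to
    # agents, with humans setting parameters and handling exceptions.
    if item.pattern_based:
        return Owner.AGENT
    # Novel, unpatterned work defaults to human judgment.
    return Owner.HUMAN
```

The point of the sketch is the ordering: relational and high-stakes checks come first, so no volume of pattern-based demand can reclassify connective or judgment work as automatable.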

2. New Roles for a New Structure

From Traditional Management to the Human-Agent Operating Model
[Figure: the traditional model places a Senior Manager above three layers of Middle Management, with ICs at the base; information routing is the primary management job. The new model replaces this with a Player-Coach (craft plus people growth), DRIs owning outcomes for 90 days, and ICs working with agents; a shared world model replaces information routing.]

Three role archetypes are emerging across the organisations making this transition most effectively. Block's own model is the clearest public articulation, but the pattern is appearing independently across sectors.

The Individual Contributor (IC) as agent orchestrator. Deep specialists whose work context is now provided by the system rather than by a manager. They operate within a capability domain, making decisions without waiting for information to travel up and down a chain of command — because that information is already available in the system's world model. Critically, every IC in this model is now also a manager: defining tasks for agents, setting parameters, reviewing outputs, and providing iterative feedback. This democratises management skills that previously existed only in formal leadership roles.

The Directly Responsible Individual (DRI). A temporary role that owns a specific cross-cutting problem or outcome — not a function, not a department, but a result. The DRI has authority to pull resources across capability teams for the duration of the problem. When the problem is resolved, the DRI role dissolves or moves. This replaces the permanent middle management layer that existed primarily to coordinate across functions — work that the system now does.

The Player-Coach. This is the evolved version of the manager. They still build — still do the craft work. They also invest in the growth of the people around them. Crucially, they are not information routers or status-meeting conveners. The system handles alignment. The DRI handles strategy. The player-coach handles the connective work: mentorship, psychological safety, the quiet act of making someone feel seen and developed. This is not a diminished role — it is a more human one.

Connective labor is not just a task, nor is it reducible to measurable ends. It is an inalienable part of humanity itself, without which the social fabric is left threadbare and thin.

Jenny L. Davis & Hayoung Seo on Allison Pugh's "The Last Human Job," Public Books, 2024

3. Governance: Big G and Little g

Governing autonomous agents at scale is fundamentally different from governing tools. When an agent acts, it acts on behalf of the organisation. Its decisions carry organisational accountability. The governance model that works — and the only one that simultaneously enables speed and manages risk — is what practitioners are calling "Big G, little g."

"Big G" is centralised: ethical principles, data security standards, compliance requirements, risk thresholds, audit trails, and the right of humans to override any agent decision at any time. These are set once, at the enterprise level, by a cross-functional body that includes legal, risk, HR, and operations. They are non-negotiable.

"Little g" is decentralised: the day-to-day empowerment of teams to deploy, configure, and manage agents within their capability domain, within the Big G guardrails. Teams don't wait for central approval to iterate on their agent workflows. They operate within the boundaries — and they own the outcomes.

Human-in-the-loop: the agent pauses; a human must approve before it continues. Appropriate for high-stakes, irreversible, or ethically sensitive actions (financial commitments, legal instruments, sensitive data).
Human-on-the-loop: the agent acts; a human monitors and can intervene post-hoc. Appropriate for medium-stakes, reversible, high-volume work where speed matters and reliability data exists.
Autonomous within bounds: the agent acts fully, governed by parameters and audit logs. Appropriate for well-validated, low-risk, high-frequency processes with clear success criteria.
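The three oversight modes can be read as a decision rule over stakes, reversibility, and validation history. A minimal sketch, assuming a simplified three-field risk profile (the field names and rule ordering are illustrative, not a published standard):

```python
from enum import Enum
from dataclasses import dataclass

class Oversight(Enum):
    IN_THE_LOOP = "human-in-the-loop"        # agent pauses for approval
    ON_THE_LOOP = "human-on-the-loop"        # agent acts; human monitors
    AUTONOMOUS = "autonomous-within-bounds"  # agent acts; audit logs only

@dataclass
class AgentAction:
    high_stakes: bool  # financial commitments, legal instruments, sensitive data
    reversible: bool   # can the action be undone after the fact?
    validated: bool    # reliability data from prior supervised runs?

def oversight_mode(action: AgentAction) -> Oversight:
    # Big G guardrail: high-stakes or irreversible actions always pause
    # for explicit human approval, whatever the team's little-g settings.
    if action.high_stakes or not action.reversible:
        return Oversight.IN_THE_LOOP
    # Well-validated, low-risk, high-frequency work may run autonomously
    # within bounds, with parameters and audit logs as the control surface.
    if action.validated:
        return Oversight.AUTONOMOUS
    # Everything else acts, but under active human monitoring.
    return Oversight.ON_THE_LOOP
```

Note the asymmetry: an action graduates to autonomy only through accumulated validation, but a single high-stakes flag is enough to force it back to human approval.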

The critical caution here is automation bias — the human tendency to defer to confident-sounding AI outputs without scrutiny, even when we have information suggesting the output may be wrong. Research shows that clinicians provided with erroneous AI recommendations overrode their own correct judgments in a meaningful proportion of cases. The "human-on-the-loop" model only works if humans are actively, sceptically reviewing — not passively monitoring. Building a culture where people feel not just permitted but expected to challenge agent outputs is as important as any governance framework.

There is a subtler governance risk that formal frameworks rarely capture. Pugh's research on connective labor points to it: the accountability that once held organisations together was not only procedural — it was relational. A manager who knew their team, understood their context, and would be held personally accountable for outcomes was also a human anchor for organisational integrity. When coordination moves into a system and oversight becomes a monitoring dashboard, that relational accountability can quietly drain away. Audit trails are not a substitute for the human judgment and ethical commitment that governance frameworks are ultimately trying to protect. Big G and little g must be designed with this in mind — not as bureaucratic controls bolted onto an agentic system, but as an expression of the values that humans in the organisation have chosen to uphold.


05 — The Human Challenge

The Hardest Part Is Not the Technology

Every organisation that has attempted this transformation at scale reports the same thing: the technical components are the easy part. The hard part is human.

Allison Pugh, professor of sociology at Johns Hopkins University, spent years interviewing over a hundred people in high-touch professions — doctors, teachers, therapists, chaplains, coaches, hairdressers — about what their work actually involved at its most essential. What she found, documented in The Last Human Job (Princeton University Press, 2024), is that what makes these roles irreplaceable is not their technical expertise. It is what she calls "connective labor": the act of seeing another person fully and reflecting that understanding back to them — a mutual recognition of humanity that is the mechanism by which trust is built, change is enabled, and care is delivered.

Pugh's warning is not simply that AI cannot do this work. It is that the same efficiency logic driving AI adoption has been eroding connective labor for decades — through standardised processes, metric-obsessed management, and the relentless compression of time for genuine human interaction. The agentic enterprise risks making this worse, not better: deploying agents to maximise throughput while inadvertently designing out the human moments of recognition and mentorship that make organisations worth belonging to. As Pugh observes, when connective labor is pushed to its limit by efficiency pressures, AI bots can appear to be better than exhausted human workers — but the solution is not to replace the human. It is to restore the conditions under which the human can do what only they can do.

The Dual Threat

Pugh identifies two simultaneous threats to what makes human work meaningful: the advancement of AI automation, and profit-driven industrial logic that imposes metrics and standardisation on inherently relational work. Leaders designing the intelligent enterprise must guard against both. The Player-Coach role is only valuable if the coach has genuine time and space for connective work — not if their performance is measured purely in throughput metrics.

Alongside this, leaders must contend with what researchers are calling "algorithmic anxiety" — a complex psychological response that is not simply fear of job loss, but a deeper uncertainty about human value, professional identity, and the meaning of work in a world where agents can perform what were previously skilled tasks. This anxiety is real, it is measurable, and it will derail transformation programmes that do not address it directly.

The evidence on what works is clear. A global study by MIT Sloan Management Review and Boston Consulting Group — based on 1,741 managers across 100 countries and 20 industries — found that organisations where employees personally derive value from AI are 5.9 times as likely to achieve significant financial benefits from AI compared with organisations where employees do not (MIT SMR / BCG, "Achieving Individual and Organizational Value With AI," 2022). The question for leaders is not "how do we get our people to accept AI?" It is "how do we design AI deployment so that our people experience it as expanding what they can do, not shrinking who they are?"

The Human-First Advantage — MIT Sloan / BCG Research (1,741 managers, 100 countries)
[Chart: organisations whose employees personally derive value from AI are 5.9× as likely to achieve significant financial benefits from AI as those whose employees do not. Source: MIT SMR / BCG, "Achieving Individual and Organizational Value With AI," 2022; n = 1,741 managers.]

McKinsey's State of Organisations 2026 puts the investment calculus plainly: for every dollar spent on technology, five dollars should be spent on people. Only fourteen percent of organisations have senior leaders consistently championing AI adoption. The leadership gap is larger than the technology gap. And that investment in people must be made with Pugh's insight in mind: not just in training programmes and change management workshops, but in the organisational conditions — time, space, relational trust — under which the connective work that makes an enterprise coherent can actually happen.


06 — The Future

Three Horizons: From Augmentation to the One-Person Enterprise

The intelligent enterprise is not a destination — it is a trajectory. The organisations moving along it are passing through recognisably distinct phases, each with its own logic, challenges, and leadership requirements. Understanding which horizon you are in, and what the next one demands, is the most important strategic clarity available to a senior leader right now.

Horizon 1, the Augmented Organisation (now; most organisations): 5–20% productivity gains.
Horizon 2, the Hybrid Organisation (now; the leaders): transformational value.
Horizon 3, the Composable Enterprise (emerging): ultra-lean, capability-native.
1. The Augmented Organisation

AI assists human workers within existing structures. Productivity gains are real but incremental (5–20%). The hierarchy persists; agents operate within it. Most organisations are here today. The risk is treating this as the destination rather than the starting point.

2. The Hybrid Organisation

Humans and agents share team membership and accountability. The hierarchy flattens. Coordination moves from management layers to shared world models. New roles emerge: DRIs, Player-Coaches, AI Agent Owners. The organisations taking this seriously — Block, JPMorgan, Amazon — are here now. Value gains move from incremental to transformational.

3. The Composable Enterprise

The organisation is designed capability-first using composable business architecture. Agents operate autonomously within well-governed capability domains. Human teams are small, deeply expert, and almost entirely focused on judgment, strategy, and connective work. The first examples are emerging. This is where the one-person enterprise lives.

The Headcount-Revenue Relationship Is Being Severed — Revenue Per Employee (USD M)
[Chart: revenue per employee. Traditional SaaS ≈ $0.25M; JPMorgan (est.) ≈ $0.5M; Midjourney $4.7M (107 employees); Medvi (2026) $200M+ (2 employees, unaudited). Sources: company reports, PYMNTS / NYT April 2026, Midjourney verified revenue 2025.]

Horizon Three: Agentic Leverage and the Ultra-Lean Enterprise

Sam Altman publicly predicted a one-person billion-dollar company in 2023. Dario Amodei gave it seventy to eighty percent odds for 2026. On April 2, 2026 — two days ago — the New York Times published the story of Matthew Gallagher and Medvi. It is a remarkable proof of concept — but the lesson it actually teaches is more nuanced, and more instructive, than the headline suggests.

Gallagher launched Medvi, a GLP-1 telehealth business, from his Los Angeles home in September 2024 with $20,000. In its first full year, Medvi reached $401 million in self-reported sales — with a net profit margin of 16.2%, nearly three times that of its largest publicly traded competitor, Hims & Hers, which employs over 2,400 people. The company is tracking toward $1.8 billion in 2026 revenue.

But Medvi is not really a one-person company. Gallagher hired his brother, uses contract engineers and account managers, and — critically — outsourced the entire regulated medical stack to specialist partners: licensed physicians, prescription processing, pharmacy fulfilment, and regulatory compliance were all handled by CareValidate and OpenLoop Health, companies with their own workforces. In March 2026, the FDA issued warning letters to Medvi and dozens of similar telehealth companies over compounded GLP-1 drug safety concerns. And the $401M revenue figure is unaudited.

What Medvi actually demonstrates — and this is the correct and genuinely important lesson — is agentic leverage in customer-facing capabilities. Gallagher retained ownership of one thing: the customer relationship and the intelligence layer above existing infrastructure. Everything else — the regulated backbone, the physical operations, the medical liability — was composed from external partners. He concentrated entirely on his differentiating capability and let agents and outsourced specialists handle everything else. The unit costs of building, acquiring customers, and managing operations were low enough that a tiny team could operate what previously required a company of hundreds.

What does your company understand that is genuinely hard to understand — and is that understanding getting deeper every day? If the answer is nothing, AI is just a cost optimisation story. If the answer is deep, AI doesn't augment your company. It reveals what your company actually is.

Jack Dorsey & Roelof Botha, Block / Sequoia Capital, March 2026
Revenue per Employee — The Agentic Leverage Shift
Traditional Enterprise: $200–300K  ·  Strong SaaS Co.: $500K  ·  Midjourney (2025): $4.7M  ·  Medvi est. (2025)*: ~$200M+
*Medvi revenue unaudited; illustrative based on reported $401M revenue across ≈2 full-time employees. PYMNTS / NYT, April 2026.

A cleaner illustration of the structural shift is Midjourney: $500 million in verified revenue in 2025 with approximately 107 employees — roughly $4.7 million in revenue per employee, compared to the $200,000–$300,000 per employee that has long been considered strong performance for a SaaS business. No external medical compliance partners required. Solo-founded startups now represent 36% of all new ventures. Sequoia Capital has begun adjusting its underwriting models to account for what it calls "agentic leverage" — the ability of tiny teams to produce outsized output through AI orchestration.

The one-person enterprise is not the model for every organisation, and the conditions that enabled Medvi — a standardised product category, an outsourceable regulated backbone, a purely digital customer relationship — are specific, not universal. Consumer software, developer tools, and proprietary trading are the categories analysts most often identify as likely to see ultra-lean enterprises first. Industries with significant physical operations, embedded human relationships, or non-outsourceable regulatory liability — healthcare delivery, construction, professional services, financial advice — will reach Horizon Three through a different path and on a longer timeline.

But the structural signal is real and matters for all organisations: the relationship between headcount and revenue, a correlation that has governed business economics for two centuries, is being severed. The implication for large incumbents is not "how do we become Medvi?" They have capabilities a solo founder cannot replicate — deep proprietary data, regulatory relationships, trusted brand, established customer networks, and Pugh's connective tissue woven across thousands of human relationships built over years. The question is whether they can deploy agentic leverage within their differentiating capabilities fast enough to outpace the founders who can now build in one year what took a decade.


07 — Leadership Priorities

Five Things Leaders Should Do Now

This is not a technology initiative. It is an organisational redesign — and it requires the same clarity of intent, personal sponsorship, and sustained investment that any major strategic transformation demands. It also requires something that most transformation frameworks understate: you cannot dismantle a hierarchy while it is still running the business.

Harvard Business School's John Kotter identified this tension as the central challenge of large-scale organisational change. His answer, developed in Accelerate (Harvard Business Review Press, 2014), was what he called a dual operating system: the existing hierarchy continues to run daily operations reliably and efficiently, while a parallel network structure — staffed by volunteers from within the hierarchy — designs and pilots the new model. The hierarchy is "optimised for the day-to-day activities" while the second system is "devoted to innovation and the design and implementation of strategy." The two operate in concert, not in competition. Transition happens through progressive expansion of the network, not through a single disruptive cutover.

Applied to the intelligent enterprise, this means: your existing management structure does not disappear on Tuesday while composable business architecture is built. You identify one capability domain, stand up a pilot that demonstrates the new model, build confidence and capability, and expand from there. The following five priorities are the sequence in which this work must actually be done.

Five Leadership Priorities — In Sequence
01. Apply composable business architecture: decompose your business into discrete capabilities; classify each as differentiating, enabling, or commodity; produce your human-agent allocation surface before you deploy a single agent.
02. Redesign roles before you scale: define what humans own (judgment, ethics, connective labor) and what agents own; create the IC / DRI / Player-Coach architecture; do not automate existing roles.
03. Invest 5× more in people than technology: genuine AI literacy, not one-off workshops; protect connective labor as an operational choice; address algorithmic anxiety directly (McKinsey, State of Organisations 2026: $1 on technology, $5 on people).
04. Build governance before you scale: Big G central ethics, security, and risk thresholds; little g team-level operational autonomy; only ~30% of organisations have mature agent governance. Governance is not the enemy of speed; it enables it.
05. Start with one capability, prove, then expand: pick a domain with rich data, clear boundaries, and manageable failure stakes; prove the role architecture and governance; then expand through Kotter's dual operating system.

1. Apply composable business architecture before you deploy agents

Before asking "where can AI help?", apply composable business architecture to decompose your business into its discrete capabilities. For each one, ask: What does this capability need to produce? What data exists to operate it? Which aspects require human judgment and which can be systematised? Which capabilities are genuinely differentiating — your strategic moats — and which are enabling or commodity? This mapping produces both the investment strategy and the human-agent allocation surface. Without it, you are deploying AI into a functional structure that will constrain it.
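The capability mapping described above can be captured as a simple inventory that yields a first-pass human-agent allocation surface. A minimal sketch, assuming a hypothetical three-way classification and readiness flags (the example capabilities and rules are this paper's illustration, not Gartner's or any vendor's method):

```python
from dataclasses import dataclass

@dataclass
class Capability:
    name: str
    classification: str   # "differentiating" | "enabling" | "commodity"
    data_rich: bool       # sufficient data exists to operate it?
    clear_criteria: bool  # well-defined success criteria?

def allocation_surface(capabilities: list[Capability]) -> dict[str, str]:
    """Produce a first-pass human-agent allocation for each capability."""
    surface = {}
    for c in capabilities:
        if c.classification == "differentiating":
            # Strategic moats stay human-led; agents augment the humans.
            surface[c.name] = "human-led, agents augment"
        elif c.data_rich and c.clear_criteria:
            # Enabling/commodity work with good data and clear criteria
            # is a candidate for agent ownership under human review.
            surface[c.name] = "agent-led, humans review"
        else:
            # Not yet agent-ready: fix the data and criteria first.
            surface[c.name] = "human-led, not agent-ready"
    return surface
```

The useful output is not the labels themselves but the third bucket: capabilities that fail the data or criteria test reveal where investment must go before any agent deployment makes sense.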

2. Redesign roles before you scale

Do not automate existing roles. Define, with genuine specificity, what humans should own in the agentic enterprise — judgment, ethics, strategy, connective labor, novel situations, accountability — and what agents should own. Create the role architecture: ICs as orchestrators, DRIs as outcome owners, Player-Coaches as human developers. Mercer's human-agent workforce research is clear that traditional job architectures do not capture this adequately; role definitions must be explicitly updated to include responsibilities for managing agent performance, reviewing outputs, and handling exceptions. Crucially, define what "managing agent performance" means in your context before you deploy — not after a failure forces the question.

3. Invest five times more in people than in technology

McKinsey's data is unambiguous: for every dollar spent on technology, five should be spent on people. This means training that builds genuine AI literacy, not one-off workshops. It means creating safe environments to experiment and fail. It means addressing algorithmic anxiety directly and transparently. And it means protecting Pugh's connective labor deliberately — not just as a cultural aspiration, but as an operational design choice. The Player-Coach role only delivers its value if coaches have genuine time and space for mentorship and human connection. If you measure their performance purely in throughput and output metrics, you will squeeze out exactly what makes them valuable. Organisations that achieve durable AI transformation are, per MIT Sloan research, those where employees experience AI as expanding their autonomy and competence — not those with the most sophisticated technology stack.

4. Build governance before you scale, not after

Establish Big G guardrails centrally — ethical principles, data security, risk thresholds, oversight requirements, audit trails — before the volume of deployed agents makes governance retroactive. Then empower teams with little g operational freedom within those guardrails. McKinsey's AI Trust Maturity Survey (2026, n≈500 organisations) found that only around thirty percent of organisations have reached meaningful maturity in agentic AI governance — and that organisations investing in responsible AI infrastructure are significantly more likely to achieve material financial returns. Security and risk concerns remain the top barrier to scaling agentic AI, cited by two thirds of organisations surveyed. Governance is not the enemy of speed. Done right, it is its precondition.

5. Start with one capability, prove the model, then expand

Pick one capability domain where the data is rich, the boundaries are clear, and the stakes of failure are manageable. Design the full model there first — the role architecture, the governance, the human-agent handoffs, the connective labor protections. Prove it works. Build the organisational confidence and capability to manage human-agent teams. Then expand through Kotter's dual operating system logic: the pilot network grows as it demonstrates results, while the hierarchy continues to run the business reliably.

A note on sector variation

The pace and shape of this transition is not uniform across industries, and senior leaders in heavily regulated or relationship-intensive sectors should calibrate accordingly. Technology sector organisations currently account for 46% of AI agent deployments globally; healthcare and life sciences represent just 4%, manufacturing 3% (Lyzr/Master of Code research, 2025/26). This gap reflects genuine structural differences — not a failure of ambition.

In financial services, regulated compliance requirements demand human-in-the-loop controls for most consequential decisions. But the sector is also the most advanced in demonstrating what is possible within those constraints. Allianz Technology SE implemented a multi-agent claims processing system — seven specialised agents handling coverage verification, fraud detection, and automated payouts — that reduced processing time by approximately 80% from a baseline of around 100 days, while maintaining full audit trails for regulatory compliance (AWS / Amazon re:Invent, 2025). Banks are deploying agentic fraud detection at transaction scale. The composable architecture is achievable in financial services — it simply requires that regulatory accountability be treated as a non-negotiable design input, not a post-deployment constraint.

In healthcare, the connective labor question is at its most acute. Pugh's research draws on healthcare precisely because the human relationship between clinician and patient is itself a mechanism of care — not just a delivery channel for technical expertise. Agentic AI in healthcare will find its greatest value in administrative and operational capabilities (scheduling, documentation, supply chain, prior authorisation) rather than in direct patient-facing roles. Healthcare is projected to be the fastest-growing agentic AI sector by CAGR at 48% through 2034 (Fortune Business Insights, Agentic AI Market Report, 2025) — but the distribution of where agents operate will look very different from a software company, and the protections around connective labor must be explicit design constraints, not afterthoughts.

In professional services — consulting, law, accounting, advisory — the differentiating capability is almost entirely human: the relational trust, contextual judgment, and professional accountability that clients pay for. Agents will transform the enabling and commodity capabilities within these firms (research, drafting, analysis, document management) dramatically. But the human is the product in a way that composable architecture must respect, not route around.

Closing Provocation

Two Questions Every Senior Leader Must Now Answer

The intelligent enterprise is not a destination you arrive at. It is a set of choices you make — about what your organisation is actually for, what it genuinely understands that is hard to understand, and what kind of human work you are willing to protect and invest in when the pressure is entirely in the other direction.

The leaders who will navigate this well are not necessarily the most technologically sophisticated. They are not necessarily the boldest in restructuring. They are the ones who hold two questions simultaneously — one about competitive advantage, one about human value — and refuse to sacrifice either for the other.

The Business Question

What does our organisation understand that is genuinely hard to understand — and is that understanding getting deeper every day we operate?

The Human Question

What does our organisation do that allows people to truly see each other — and are we protecting and deepening that, or slowly squeezing it out?

The first question — articulated most sharply by Dorsey and Botha in their March 2026 essay — determines whether you have a business that AI reveals and amplifies, or one that AI simply replaces. Organisations that cannot answer it concretely are not building an intelligent enterprise. They are running a cost optimisation programme with a visionary narrative.

The second question — the one Allison Pugh's decade of research demands we ask — determines whether the organisation you build is worth the people in it. Whether the humans who work alongside agents experience their work as more meaningful, not less. Whether the connective tissue that holds an enterprise together as a human institution is being protected or quietly dismantled in the name of efficiency.

Both questions have the same answer, if you're doing this right.

Sources & References

Primary Academic & Book Sources

Allison Pugh, The Last Human Job: The Work of Connecting in a Disconnected World, Princeton University Press, 2024 — Winner, Distinguished Scholarly Book Award, American Sociological Association  ·  John P. Kotter, "Accelerate!" Harvard Business Review, November 2012; expanded as Accelerate: Building Strategic Agility for a Faster-Moving World, Harvard Business Review Press, 2014  ·  Ronald Coase, "The Nature of the Firm," Economica, 1937  ·  California Management Review, "From Coase to AI Agents: Why the Economics of the Firm Still Matters," April 2025

Strategy & Research Firms

Jack Dorsey & Roelof Botha, "From Hierarchy to Intelligence," Block / Sequoia Capital, March 31, 2026  ·  McKinsey & Company, "Seizing the Agentic AI Advantage," June 2025  ·  McKinsey & Company, "State of AI 2025: Agents, Innovation, and Transformation"  ·  McKinsey & Company, "State of Organisations 2026"  ·  McKinsey & Company, "State of AI Trust in 2026: Shifting to the Agentic Era," March 2026  ·  McKinsey & Company, "What's Your Superpower? Building Institutional Capability for Competitive Advantage," 2023  ·  Gartner, "40% of Enterprise Apps Will Feature Task-Specific AI Agents by 2026," August 2025  ·  Gartner, "Over 40% of Agentic AI Projects Will Be Canceled by End of 2027," June 2025  ·  Gartner, "Top Strategic Technology Trends for 2026," October 2025  ·  BCG, "Leading in the Age of AI Agents: Managing the Machines That Manage Themselves," November 2025  ·  BCG, "Platform Operating Models," 2023  ·  Salesforce, AgentForce platform documentation, 2025–26  ·  Anthropic, Model Context Protocol (MCP) specification, 2024–25  ·  Deloitte, "Configuring the Transformation Roadmap Using the Capability Model," 2025  ·  Mercer, "Unlocking the Potential of the Human-Agent Hybrid Workforce," 2026  ·  Writer, "AI Agent Owner Role: New Org Chart for the Agentic Enterprise," 2026  ·  MIT Sloan Management Review, "Achieving Individual and Organisational Value With AI"

Industry & Sector Sources

PYMNTS / New York Times, "The One-Person Billion-Dollar Company Is Here" (Medvi / Matthew Gallagher), April 2, 2026  ·  TechCrunch, "AI Agents Could Birth the First One-Person Unicorn — But at What Societal Cost?" February 2025  ·  Lyzr / Master of Code, AI Agent Deployment by Sector Statistics, 2025/26  ·  Fortune Business Insights, Agentic AI Market Size Report, 2025  ·  AWS / Amazon re:Invent 2025, "Financial Institutions Advance Agentic AI" (Allianz claims processing case)  ·  IDC White Paper, "Cloud Migration and Modernization: Healthcare, Financial Services, and Manufacturing," February 2026  ·  HBR, "The Last Mile Problem Slowing AI Transformation," March 2026  ·  Deloitte, "State of AI in the Enterprise," 2026