Brain and Intelligence Lab

From the Invisible Hand to the Undesignable Mind

The Social Division of Labor in the Age of AGI

Shi Gu, Published on March 29, 2026


The age of AGI is usually narrated as a story about capability: better models, wider automation, faster research, cheaper cognition. But the deeper shift is elsewhere. For most of history, intelligence entered production only through persons. AGI changes that. Intelligence is becoming externalized, callable, and socially organized through agents — and the central question becomes how humans and agents are linked through context, permissions, verification, memory, and responsibility.

Adam Smith's "invisible hand" offered one image of how social order could arise without being exhaustively designed. Hayek sharpened the point: the real problem of economic order is the use of knowledge dispersed across many actors and situations.[1] The same problem now returns in a different medium. If intelligence is distributed across humans, models, tools, and agents, society faces a new coordination problem: how is distributed intelligence organized into a workable order?

My argument is simple. The defining transformation of the AGI era is not stronger intelligence as such, but the emergence of a new mechanism of social coordination. The token economy provides its micro-foundation. Division of labor returns as a general principle of collaboration. And the resulting order cannot be fully designed from above. What emerges is what I will call the undesignable mind: a distributed social-cognitive order whose coherence is real, even though no single actor fully plans or contains it.

1. The Token Economy and the Deflation of Raw Intelligence

To see why this order is historically novel, it helps to begin with the token. At first glance the token looks like a technical artifact: a unit of text segmentation used for pricing, inference, and optimization in language models. Economically, however, it is more than that. The token is the first fine-grained unit through which cognitive activity itself becomes measurable, divisible, and exchangeable. It is not merely a unit of language. It is the smallest practical unit in which intelligence — understood as the capacity to perform cognitive work on behalf of others — can be socially processed: counted, routed, divided, recombined, and paid for.

That matters because earlier societies priced cognition only indirectly. Labor was coordinated through coarse institutional forms: hours, contracts, occupations, salaries, departments. Intelligence entered production through embodied labor, and therefore through the person. Tokenization introduces a different granularity. It breaks cognitive work into sequences of computation, attention, context retrieval, and output generation. In doing so, it creates the material conditions under which intelligence can be treated not just as a human attribute, but as a callable resource.

A token therefore performs several functions at once. It is a unit of computation. It is a unit of attention bandwidth. It is a carrier of memory, instruction, and situational detail. And increasingly, it is a unit of coordination: tokens mediate not only between user and model, but between model and tool, and between one agent and another. They are not just the fuel of intelligence. They are part of the medium through which intelligence acquires social form.

Yet the token economy contains a paradox from the beginning. Precisely because tokens make cognition measurable and tradable, they also expose it to deflationary pressure. Recent evidence already points in this direction. The Stanford AI Index reports that the cost of querying a model performing at roughly GPT-3.5 level fell by a factor of more than 280 between late 2022 and late 2024.[2] Current API markets display the same structural trend: tiering, caching, batching, open-weight competition, and increasingly fine-grained price pressure.[3] Raw token generation is therefore unlikely to remain the durable center of value. It is moving, over time, toward something closer to utility-like supply.

The deeper reason is not only engineering progress. It lies in an asymmetry between the production of knowledge and the transfer of knowledge. Producing new knowledge is search-heavy, uncertain, and path-dependent. It involves failed hypotheses, local context, tacit judgment, and costly experimentation. But once knowledge has been rendered into language, examples, procedures, traces, or stable model behavior, it becomes easier to compress, imitate, and redeploy. In machine learning, this is the logic of distillation: a cumbersome model's capability can be transferred into a smaller and cheaper one.[4] In a broader social sense, distillation names a more general law: once intelligence has been made expressible, competition pushes toward cheaper forms of reproducing it.
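The distillation logic invoked here has a precise technical form. As a minimal illustration, the sketch below implements the soft-target objective from Hinton et al. [4]: the student is trained to match the teacher's temperature-softened output distribution, which is what makes an expensively discovered capability cheap to reproduce. This is a toy sketch, not the paper's code; the function names, temperature value, and example logits are all illustrative choices of my own.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; higher temperature yields a softer
    distribution, exposing more of the teacher's 'dark knowledge'."""
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """Cross-entropy of the student's prediction against the teacher's
    softened distribution -- the core transfer objective in distillation."""
    p = softmax(teacher_logits, temperature)   # teacher's soft targets
    q = softmax(student_logits, temperature)   # student's prediction
    return -np.sum(p * np.log(q + 1e-12))

# A student whose logits track the teacher's incurs a lower loss than
# one whose logits do not, so gradient descent pulls it toward the
# teacher's behavior -- reproducing the capability without the search.
teacher = [4.0, 1.0, 0.5]
good_student = [3.9, 1.1, 0.4]
bad_student = [0.5, 1.0, 4.0]
assert distillation_loss(teacher, good_student) < distillation_loss(teacher, bad_student)
```

The asymmetry the essay describes is visible in the code itself: nothing in the objective requires rediscovering how the teacher came to its distribution. Only the expressed behavior is needed, which is why expression, once achieved, invites reproduction.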

Language matters here in a particularly deep way. Language is not just a communication layer placed on top of intelligence. It is the medium through which knowledge becomes transferable across agents, organizations, and generations. Once knowledge can be encoded in publicly manipulable symbolic form, some version of distillation becomes structurally difficult to avoid. What is expensive is often the discovery of a capability. What becomes cheaper is its repeated reproduction once encoded.

The pattern is already visible. The 280-fold price decline cited above was not driven by a single firm's pricing decisions. It was driven by open-weight replication, distilled variants running on consumer hardware, and competitive entry by providers who could reproduce the capability without bearing the original research cost. The knowledge embedded in frontier models was expensive to produce. Its reproduction became cheap enough to commoditize. This is the structural logic of tokenized intelligence working itself out in historical time.

For that reason, tokenization should be understood not only as the creation of a market for intelligence, but also as the beginning of its commoditization. Selling raw tokens, by itself, is unlikely to remain the main locus of durable value capture. The token layer will flatten. It will not disappear, just as bandwidth or electricity do not disappear. But it will become increasingly difficult to defend value at the unit level.

That is the crucial turn. As raw execution deflates, value migrates toward what resists flattening: context selection, task routing, memory, verification, authorization, and responsibility. Cheap execution is not the same as cheap coordination. The question becomes how this coordination takes shape.

2. Why Division of Labor Returns

If raw cognitive execution becomes cheap, the next question is unavoidable: how is it organized? Here the old principle of division of labor returns, but in a new medium. Division of labor is not a historically accidental feature of human society, nor merely a consequence of biological limitation. It is a general principle of social collaboration under three conditions: relevant knowledge is dispersed, participants occupy heterogeneous positions, and coordination is costly. Whenever these conditions hold, no single actor can efficiently perform or coordinate all functions at once, and pressures toward specialization emerge.

That is exactly what happens once intelligence is externalized into agents. The old unity of cognition begins to dissolve. Intelligence no longer appears as one inseparable capacity housed in a person. It appears instead as a set of partially separable functions: retrieving context, interpreting tasks, planning, executing, checking, remembering, authorizing, escalating, absorbing failure. These functions can be bundled together in simple settings. But as scale, heterogeneity, and consequence increase, the bundle becomes unstable. Different functions face different constraints, and those constraints create pressure for differentiation.

This is not merely a matter of which system is "more intelligent." What matters is often positional rather than purely cognitive. Some agents have privileged access to context. Others accumulate task history. Others occupy trusted interface positions. Others are permitted to call tools, touch internal systems, or issue final approvals. The relevant heterogeneity lies not only in capability but in location: memory location, interface location, permission location, liability location. Specialization emerges because intelligence in the AGI era is distributed through uneven access to context, state, authority, and risk.

Coase helps clarify why such differentiation becomes durable. The issue is not whether one sufficiently general model could perform retrieval, planning, execution, checking, and authorization all at once. The issue is whether it is efficient to do so. Institutional boundaries, Coase argued, are shaped by transaction costs rather than by pure technical possibility.[5] The same is true inside an agent society. One agent may be optimized for retrieval, another for long-horizon planning, another for conservative verification, another for execution under strict permission boundaries. Once switching, monitoring, checking, and routing costs differ across functions, specialization becomes economically rational.

This is why division of labor should be treated as a principle of social collaboration rather than a special feature of human history. Wherever knowledge is dispersed, positions are heterogeneous, and coordination is costly, pressures toward specialization emerge. This was true for human societies. It will also be true for artificial collectives once intelligence becomes externalized into agents. The medium changes. The principle does not.

There is an important consequence here. The emergence of agent specialization should not be read as evidence against general intelligence. It is often the consequence of general intelligence becoming socially deployable. A sufficiently general intelligence makes it possible to externalize cognition across many tasks. But once cognition is externalized, the social conditions of its use generate differentiation. Generality enables distribution. Social coordination makes specialization inevitable.

3. Why the Resulting Order Cannot Be Fully Designed

Specialization may be inevitable. It does not follow that its mature form is designable.

This is where the strongest temptation to centralize appears. If intelligence can be copied, accelerated, and coordinated at scale, why not simply design the whole system from above? Dario Amodei's image of "a country of geniuses in a datacenter" captures that temptation vividly.[6] It expresses something real about the scale of future capability. But it can also encourage an overly unified picture of what that capability implies. The issue is not only how much intelligence exists. It is how that intelligence becomes partitioned, routed, verified, authorized, and made accountable across many interacting nodes.

Hayek's point remains decisive here. The real problem of order is not optimization under fully given information, but the use of knowledge dispersed across actors and situations.[1] In an agent society, no planner — human or machine — can fully know in advance which fragments of context will matter, which permissions should be bundled together, which memory traces will later prove decisive, which verification route will be safest, or which interaction patterns will accumulate into stable functional advantage. These facts are not all available before action begins. They are discovered through execution, failure, repetition, local adaptation, and institutional learning.

The problem is sharper still because what is dispersed is not only knowledge but state. Relevant state includes local memory, workflow residue, trust, interface constraints, permission history, and authority gradients. Much of it is generated during action itself. The system is therefore doubly undesignable: first because relevant information is distributed, and second because the very execution of tasks changes the future topology of specialization by sedimenting experience and trust in some nodes rather than others.

One might object that a sufficiently powerful intelligence could overcome this problem — that a system capable of modeling all relevant agents, contexts, and interactions could centralize coordination efficiently. This is the strongest version of the planning objection, and it cannot be dismissed on grounds of mere computational difficulty. The deeper answer is that the relevant state is not given before action but generated through it. Each decision by each agent alters the landscape that future decisions must navigate. Execution is not merely the implementation of a prior plan; it is the process through which the conditions of coordination are created and discovered. Even a planner with unbounded processing power faces the fact that the territory it seeks to map is being redrawn by the act of mapping. This is not an information-processing limit. It is a temporal one.

Coase provides the complementary half of the argument. Even if one imagines a highly general system capable of retrieval, planning, execution, checking, and authorization, that does not settle whether those functions should remain unified. The real question is comparative: when is it cheaper to keep them bundled, when to split them into specialized internal modules, and when to expose them to more open exchange across distinct agents? Organizational boundaries arise because different forms of coordination carry different costs.[5] The same is true here. The boundaries between unified agents, modular stacks, internal hierarchies, and more open agent ecologies are not deduced from capability alone. They are discovered through relative coordination cost.

For that reason, the effective division of labor among agents cannot be fully specified ex ante. One can design protocols, interfaces, evaluation procedures, permission systems, and pricing schemes. One can impose provisional roles. But stable roles are not merely assigned; they are found. Reputations are not simply declared; they are produced through repeated performance. Authority is not merely granted; it becomes socially effective when some nodes prove reliable in handling context, risk, escalation, and failure. One may design the conditions under which order forms. One does not fully design the concrete order itself.

This is what I mean by the undesignable mind. It is not a single giant machine-brain, nor a sovereign AGI actor that simply rules from above. It is a distributed cognitive order formed by humans, agents, platforms, protocols, and institutions, whose coherence is real but not fully represented anywhere.

The word "mind" here requires clarification. I do not mean a unified consciousness, nor a literal brain distributed across silicon. I mean something closer to what emerges when cognitive functions are divided across many actors, yet aggregate behavior displays a coherence that no single participant designed or fully comprehends. Adam Smith's market order exhibited this property: individual actors pursuing local ends produced system-level outcomes — price signals, resource allocation, emergent specialization — that no planner commanded. The undesignable mind is an analogous phenomenon in a richer medium. Where the market coordinated through prices and exchange, the agent society coordinates through context passing, memory accumulation, permission structures, and verification chains. The resulting order is cognitive in a way that market order was not, because the medium itself is intelligence rather than goods. But it shares the crucial feature: coherence without command.

This places the concept between two familiar pictures. On one side stands the singleton AGI — a unified superintelligent actor whose internal operations are, in principle, fully designable by its builders. On the other stands mere chaos: many agents acting without coordination, producing noise rather than order. The undesignable mind names the space between these poles. It is ordered enough to exhibit functional coherence — stable specializations, reliable routing, predictable authority gradients — yet too distributed and path-dependent for any single actor to specify its structure in advance. Its coherence is real. Its blueprint does not exist.

The most important disagreement with the "genius empire" intuition is therefore not about scale. It is about form. The more consequential possibility is not the rise of a single coherent artificial sovereign. It is the emergence of a dense ecology of interacting cognitive nodes whose effective order remains only partially legible even to its builders.

4. From Division of Labor to Relational Division of Labor

Once the problem is framed this way, the analysis must move beyond division of labor in the narrow sense. Classical division of labor asks who does what. But in a society of agents embedded in layered systems of memory, permissions, and verification, the more fundamental question is: who is connected to whom, through what interface, under what authority, with what residual responsibility, and with what accumulated context? The problem is not exhausted by task allocation. It is a problem of relational organization.

This is why the AGI era calls for a broader concept: relational division of labor. By this I mean a form of social differentiation in which functional position is determined not only by the task one performs, but by one's location within a network of context, memory, interface access, verification chains, and authority boundaries. In industrial society, specialization was often tied to embodied skill and role identity. In an agent society, specialization increasingly depends on whether a node has privileged context, long historical memory, a trusted interface position, the right to call tools, the right to escalate, the obligation to verify, or the authority to sign off. The scarce thing is no longer simply "who has the skill," but "who occupies the position from which a given function can be reliably and legitimately performed."

Consider a concrete case. In a large enterprise deploying multiple AI agents for software development, one agent may begin as a general-purpose code reviewer. Over months of operation, it accumulates context about the codebase's architecture, its recurring failure modes, and the implicit preferences of the engineering team. It becomes, in practice, the institutional memory of code quality — not because it was designed for that role, but because its accumulated context makes its reviews more reliable than those of any fresh agent with equivalent raw capability. Replacing it with a technically superior model would mean losing the embedded context that constitutes its actual social function. Its value lies not in its intelligence but in its position.

This shift matters because intelligence itself is increasingly reproducible, while relational embeddedness is not. A base model may be widely available. A fine-tuned derivative may be easy to copy. A demonstrated capability may be distilled and redistributed at lower cost. But not every node can inherit the same workflow history, permission bundle, trust relation, or liability-bearing status. As raw cognitive execution becomes cheaper, what becomes socially decisive is not intelligence in the abstract, but intelligence situated within a relation.

Seen this way, AGI society can be described as a gradual transformation of the social unit of production. In earlier economies, the primary unit was the individual worker, supplemented by tools and embedded in a firm. In the AGI era, the operative unit increasingly becomes the human-agent assemblage, and beyond that, the networked ecology of such assemblages. A person no longer acts alone, nor does an agent act in isolation. Each operates through bundles of delegated cognition, retained memory, constrained interfaces, and institutionalized handoffs. The relevant social form is therefore neither the autonomous individual nor the fully sovereign machine actor, but a layered topology of joint cognition. What distinguishes the assemblage from mere tool-use is the persistence of shared state: the agent retains memory, the human retains authority, and the relationship itself accumulates context that neither party fully owns.

5. Why Position Becomes the Decisive Scarcity

From this perspective, the evolution of AGI society can be sketched as a sequence of deepening relational differentiation. First comes tokenized cognition: intelligence becomes meterable, callable, and transferable in fine-grained units. Next comes internal specialization: organizations differentiate agents by function — retrieval, planning, execution, evaluation, memory, escalation, authorization. Then comes external modularization: some of these functions become portable across institutional boundaries and begin to behave more like market services than internal roles. Finally comes institutional consolidation: as distributed agent action scales, verification, authorization, and liability become more formalized and more centralized. These stages are not strictly sequential — they overlap and coexist — but they describe a structural tendency toward increasing relational complexity. What begins as distributed execution culminates not in pure decentralization, but in a hybrid order: open at the level of cognitive supply, increasingly structured at the level of social control.

This hybrid structure — the institutional consolidation stage at work — produces several regularities. Execution tends toward decentralization while responsibility tends toward centralization. As token costs fall and capabilities spread, more tasks can be delegated outward to specialized agents. But verification, final authorization, and consequence-bearing do not diffuse in the same way. They become more valuable precisely because abundant execution creates stronger demands for reliable filtering and accountable closure. Hierarchy therefore shifts from pure skill to positional authority. The highest-value positions are not necessarily those that perform the most cognitive work directly, but those that control context access, define interfaces, certify outputs, and bear final responsibility. Scarcity migrates upward: society moves from a world in which intelligence itself is scarce to one in which reliable coordination is scarce.

This also generates new tensions. Cheap cognition pushes toward open routing, modular competition, and substitutability. But the need for consistency, auditability, and responsibility pushes toward platforms, standards, certified interfaces, and concentrated authority. Capabilities may be portable, but effective position often depends on accumulated local state: past interactions, embedded trust, historical memory, and domain-specific legitimacy. Intelligence may become more broadly available, yet access to decisive contexts, interfaces, and verification chains may remain tightly structured. The new inequalities of AGI society may therefore arise less from raw access to intelligence than from unequal access to socially effective positions within the network of intelligence.

If intelligence itself becomes cheap while positional advantage does not, then the primary axis of inequality shifts from cognitive endowment to structural location. Those who control the interfaces through which intelligence is routed, the verification layers through which outputs are certified, and the memory architectures through which institutional context accumulates will occupy positions of durable advantage. Conversely, actors confined to executing tasks without positional context — whether human workers or substitutable agents — will face competitive pressure analogous to the commoditization of raw tokens.

At this point, the phrase "division of labor" becomes too narrow if it is understood only in its classical occupational sense. What is taking shape is not simply a finer allocation of tasks, but a new architecture of association. Agents and humans will not merely perform different roles; they will be linked through persistent chains of context, permissions, routing, validation, and liability. The mature AGI order is therefore unlikely to be a world ruled by one supreme intelligence or simply populated by many interchangeable agents. It is more likely to be a society in which intelligence is abundant but unevenly situated, widely callable but unequally authorized, broadly distributed but hierarchically organized through relations.

6. Conclusion: Not Only Stronger Intelligence, but Also a New Mechanism of Social Coordination

What has this analysis established? Not a prediction about which systems will prevail or which firms will capture value, but a structural claim about the form intelligence takes once it becomes socialized.

The token economy gives cognition a micro-economic medium — measurable, divisible, exchangeable. But that same medium exposes raw execution to deflation, making coordination rather than intelligence the scarce resource. Division of labor returns not as a historical echo but as a general law of collaboration: wherever knowledge is dispersed and coordination costly, specialization emerges whether the participants are human or artificial. Yet the form that specialization takes cannot be fully designed. Relevant state is generated through action, not given before it. Effective roles are found, reputations produced, and authority becomes socially real only through repeated performance.

The deepest implication concerns what becomes scarce. In a world of abundant intelligence, the critical bottleneck is not cognitive power but cognitive position: who controls the interfaces, who accumulates the context, who bears the responsibility, who certifies the output, who is trusted to close the loop. The key social unit is no longer the individual worker or the singular sovereign AGI, but the human-agent assemblage and the ecology of such assemblages. The emerging inequalities of this order will track positional advantage more than raw capability.

This is the undesignable mind: a distributed cognitive order coherent enough to exhibit stable structure, yet too path-dependent and dispersed for any actor to specify its form in advance. It is ordered but not commanded — legible in its effects but not fully representable from any single vantage point. That does not mean the order is ungovernable. It means that governance must operate through framework conditions — standards, interfaces, permission architectures, liability rules — rather than through the direct specification of roles. The invisible hand required legal infrastructure; the undesignable mind will require its own.

Where the invisible hand coordinated the division of material labor through prices, the undesignable mind coordinates the division of cognitive labor through context, position, and trust. The AGI era may ultimately be remembered not as the age in which machines became more intelligent, but as the age in which the decisive problem shifted from having intelligence to organizing its position.


References

  1. F. A. Hayek, "The Use of Knowledge in Society," The American Economic Review 35, no. 4 (1945): 519–530.
  2. Stanford Institute for Human-Centered Artificial Intelligence, AI Index Report 2025. The report highlights a more than 280-fold decline in the cost of GPT-3.5-level inference between November 2022 and October 2024.
  3. For independent tracking of per-token cost compression across providers, see Artificial Analysis (artificialanalysis.ai). The structural trend is also documented in the Stanford AI Index Report cited above.
  4. Geoffrey Hinton, Oriol Vinyals, and Jeff Dean, "Distilling the Knowledge in a Neural Network," arXiv:1503.02531 (2015).
  5. Ronald H. Coase, "The Nature of the Firm," Economica 4, no. 16 (1937): 386–405.
  6. Dario Amodei, The Adolescence of Technology: Confronting and Overcoming the Risks of Powerful AI (2026), especially the metaphor of "a country of geniuses in a datacenter."