Tokenized Cognition Model (TCM™)

An economic framework that defines how AI systems generate value through metered computational work rather than software access. Under TCM, revenue scales with total system activity — the volume of work performed — rather than the number of users or seats acquired.

TCM identifies three primary drivers of value: Agent Density, Cognition Intensity, and Loop Persistence. These combine to produce Cognition Throughput, the core output metric of the framework.

TCM represents a structural shift from access-based computing to work-based computing. It aligns AI economics with labor economics rather than software economics, and expands the total addressable market from global software spending to the global labor market — estimated at $60–70 trillion in annual wage expenditure.

See also: Cognition Throughput, Agent ARPU, Valuation Implied Cognition Load

Cognition Throughput (CT)

The total productive output of an AI system, expressed as the product of its three primary drivers:

CT = AD × CI × LP

where AD is agents per user, CI is tokens per task-hour, and LP is an autonomy coefficient from 0 to 1.

CT replaces traditional SaaS metrics — ARR, seat count, ARPU — as the primary measure of value creation in AI-native systems. It is the central variable through which TCM translates system activity into economic output.
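
The formula lends itself to a direct numeric sketch. The figures below are purely illustrative and not drawn from any real deployment:

```python
def cognition_throughput(agent_density: float,
                         cognition_intensity: float,
                         loop_persistence: float) -> float:
    """CT = AD x CI x LP.

    agent_density:       agents per user (AD)
    cognition_intensity: tokens per task-hour (CI)
    loop_persistence:    autonomy coefficient in [0, 1] (LP)
    """
    if not 0.0 <= loop_persistence <= 1.0:
        raise ValueError("LP must lie in [0, 1]")
    return agent_density * cognition_intensity * loop_persistence

# Example: 5 agents per user, 200k tokens per task-hour, LP of 0.6
print(cognition_throughput(5, 200_000, 0.6))  # 600000.0
```

Because the three drivers multiply rather than add, a modest improvement in any one of them scales total throughput directly, which is the compounding dynamic the framework emphasizes.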

See also: Agent Density, Cognition Intensity, Loop Persistence, Agent ARPU

Agent Density (AD)

The number of agents deployed per user, system, or organization. Agent Density is the first primary driver of Cognition Throughput.

As AD increases, a fixed user base generates proportionally more system activity and revenue. This decouples growth from user acquisition — a structural difference from SaaS economics, where revenue scales with users rather than with the work each user deploys.

See also: Cognition Throughput, Agent ARPU

Cognition Intensity (CI)

The volume of tokens consumed per task, multiplied by task frequency. Cognition Intensity is the second primary driver of Cognition Throughput.

CI captures the depth and complexity of work performed by agents. A system executing shallow, infrequent tasks carries low CI. A system executing complex, chained, high-frequency workflows carries high CI. As models improve and token costs decline, CI is expected to increase across deployments as tasks that were previously cost-prohibitive become economically viable.

See also: Cognition Throughput, Loop Persistence

Loop Persistence (LP)

The degree to which tasks are executed continuously and autonomously, expressed as an autonomy coefficient from 0 to 1. Loop Persistence is the third primary driver of Cognition Throughput.

LP captures the extent to which workflows operate without human initiation or intervention. A fully supervised, manually triggered workflow has low LP. A fully autonomous, self-sustaining multi-agent loop approaches LP of 1. Increasing LP is the mechanism by which AI systems transition from tools that assist human productivity to infrastructure that independently generates economic output.

See also: Cognition Throughput, Agent Density

Agent ARPU (A-ARPU™)

The revenue generated per agent-enabled user. Unlike traditional ARPU, which measures the price of access to a platform, A-ARPU measures the volume of work performed through agents on behalf of that user.

A-ARPU is the monetization expression of Cognition Throughput. As Agent Density, Cognition Intensity, and Loop Persistence increase for a given user, A-ARPU rises — independent of whether that user's subscription price changes. This creates a compounding revenue dynamic that has no direct equivalent in SaaS economics.
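
The text defines A-ARPU qualitatively, so the sketch below rests on an assumption: that A-ARPU can be approximated as per-user token volume times a revenue-per-token rate. Both the formula and all of the numbers are illustrative, not part of the framework's own notation:

```python
def agent_arpu(agent_density: float,
               tokens_per_task_hour: float,
               loop_persistence: float,
               task_hours_per_month: float,
               revenue_per_token: float) -> float:
    """Approximate monthly revenue per agent-enabled user (illustrative).

    The first three arguments are the AD, CI, and LP drivers; the last
    two (task hours per month, revenue per token) are assumed inputs
    not specified by the source text.
    """
    monthly_tokens = (agent_density * tokens_per_task_hour *
                      loop_persistence * task_hours_per_month)
    return monthly_tokens * revenue_per_token

# 5 agents, 200k tokens/task-hour, LP 0.6, 160 task-hours/month,
# metered at $2 per million tokens: roughly $192/month
print(agent_arpu(5, 200_000, 0.6, 160, 2 / 1_000_000))
```

Note that holding the subscription price constant while raising any driver raises the result, which is the decoupling of monetization from access pricing the entry describes.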

See also: Cognition Throughput, Tokenized Cognition Model

Valuation Implied Cognition Load (VICL™)

The estimated total work — measured in tokenized computation — that an AI system must perform to justify its market valuation. VICL is a reverse-engineering framework that translates a stated valuation into the level of agent activity, token consumption, and autonomous execution required to support it.

The methodology works backward through four steps:

  1. Start with market valuation
  2. Infer the revenue levels required to justify it
  3. Translate required revenue into token consumption at current and projected pricing
  4. Translate token consumption into required levels of Agent Density, Cognition Intensity, and Loop Persistence

This inverts the standard valuation question. Rather than asking whether a company's user base justifies its price, VICL asks: given this price, what must the system actually do? It then stress-tests whether those implied activity levels are achievable given current adoption curves, token pricing trajectories, and deployment infrastructure.

VICL produces three interpretive outcomes:

Low VICL — modest agent adoption and token usage are sufficient to justify valuation. The market is pricing in near-term, achievable activity levels.

Moderate VICL — meaningful agent deployment and recurring autonomous workflows are required. The valuation implies committed enterprise adoption and expanding loop persistence.

High VICL — large-scale, continuously operating multi-agent systems are required. The valuation is pricing a future state of infrastructure-level AI deployment that has not yet materialized.
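
The four-step inversion and the three tiers above can be sketched numerically. The revenue multiple, token price, and tier cutoffs below are illustrative assumptions, not parameters defined by the framework:

```python
def vicl_tier(valuation: float,
              revenue_multiple: float,
              revenue_per_token: float,
              users: int,
              low_cutoff: float = 1e8,
              high_cutoff: float = 1e9) -> tuple:
    """Step 1 is the valuation itself; steps 2 through 4 work backward
    from it. The cutoffs separating Low/Moderate/High are hypothetical."""
    required_revenue = valuation / revenue_multiple          # step 2
    required_tokens = required_revenue / revenue_per_token   # step 3
    ct_per_user = required_tokens / users                    # step 4: implied AD x CI x LP
    if ct_per_user < low_cutoff:
        return ct_per_user, "Low VICL"
    if ct_per_user < high_cutoff:
        return ct_per_user, "Moderate VICL"
    return ct_per_user, "High VICL"

# $10B valuation, 20x revenue multiple, $2 per million tokens, 1M users:
# implies roughly 2.5e8 tokens of cognition per user
ct, tier = vicl_tier(10e9, 20, 2 / 1_000_000, 1_000_000)
print(tier)  # Moderate VICL
```

The point of the sketch is the direction of the calculation: valuation is the input and required system activity is the output, which is the inversion of the standard valuation question described above.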

VICL is most useful in evaluating AI-native companies where traditional metrics — revenue multiples, user counts, ARR — fail to capture the nonlinear scaling dynamics of token-based economics. It shifts valuation analysis from growth assumptions to system activity requirements, aligning financial expectations with the actual mechanics of digital labor.

See also: Cognition Throughput, Tokenized Cognition Model, Digital Labor Economics

Digital Labor Economics

The emerging economic system in which AI agents perform work previously performed by humans, priced by volume of output rather than time or access. Digital Labor Economics treats tokens as the unit of work, agents as the unit of labor, and Cognition Throughput as the unit of productive output.

Under this system, the relevant market is not software spend — it is the portion of global labor that can be performed, augmented, or replaced by autonomous AI systems. At current trajectories, this represents the largest expansion of addressable market in the history of the technology industry.

Digital Labor Economics is the conceptual foundation of the TCM framework and the context within which VICL, A-ARPU, and Cognition Throughput are defined.

See also: Tokenized Cognition Model, Valuation Implied Cognition Load, Agent ARPU
