The Cognitive Thermodynamics of Power is a framework that treats institutional power as a thermodynamic system. Attention, judgment, and decision throughput behave like energy in a physical system: conserved across closed boundaries, dissipated across leaky ones, and concentrated wherever the gradient between high-cognition and low-cognition states is steepest. The framework describes how AI-mediated cognition redistributes institutional power by changing where cognitive work compounds and where it leaks.
The framework is built on three observed regularities, each a behavioral analog to a thermodynamic law:
The first is conservation under closed boundaries. Inside an institution that controls its own decision surface — its own meetings, its own data, its own analytical workflow — cognitive work compounds. Each cycle of judgment adds to a stock of institutional understanding that the next cycle can draw on. The institution gets sharper over time without external input. This is the closed-system case: cognition is conserved and accumulates.
The second is dissipation across leaky boundaries. Most institutional decision surfaces are not closed. They leak through public commentary, regulatory disclosure, third-party platforms, advisor relationships, and increasingly through AI systems trained on the institution's outputs. Each leak transfers cognitive work outside the boundary, where it cannot be recaptured. Dissipation is silent and continuous. Institutions rarely measure it because the loss does not appear on any balance sheet — but the institution's relative cognitive position degrades regardless.
The third is concentration along gradients. Power flows from regions of low cognitive density to regions of high cognitive density, not the reverse. An entity with deep, structured, retrievable cognition pulls authority toward itself simply by existing in a landscape where most other entities are diffuse. The gradient does the work. This is why a single well-structured archive can dominate a category populated by larger but less coherent participants.
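The three regularities above can be sketched as a toy numerical model. Everything here is an illustrative assumption for intuition only: the function names, the multiplicative update rule, and all coefficients are invented for this sketch and are not part of the framework itself.

```python
# Toy sketch of the three regularities: compounding under closed
# boundaries, dissipation across leaky ones, and authority flowing
# along the cognitive-density gradient. All rates are illustrative.

def step(stock: float, compound_rate: float, leak_rate: float) -> float:
    """One decision cycle: cognition compounds inside the boundary,
    then a fraction dissipates across it and is never recaptured."""
    return stock * (1 + compound_rate) * (1 - leak_rate)

def simulate(cycles: int, compound_rate: float, leak_rate: float) -> float:
    """Run repeated judgment cycles from a unit stock of cognition."""
    stock = 1.0
    for _ in range(cycles):
        stock = step(stock, compound_rate, leak_rate)
    return stock

def authority_flow(dense: float, diffuse: float, coupling: float = 0.1) -> float:
    """Concentration along gradients: authority transfer is proportional
    to the difference in cognitive density, toward the denser entity."""
    return coupling * (dense - diffuse)

# Closed boundary (leak_rate = 0): cognition is conserved and compounds.
closed = simulate(20, compound_rate=0.05, leak_rate=0.0)

# Leaky boundary: identical judgment cycles, but each cycle's output
# partly dissipates, so the relative position degrades over time.
leaky = simulate(20, compound_rate=0.05, leak_rate=0.08)

print(f"closed stock after 20 cycles: {closed:.2f}")
print(f"leaky stock after 20 cycles:  {leaky:.2f}")
print(f"authority flow toward denser entity: {authority_flow(closed, leaky):.2f}")
```

Under these assumed rates the closed stock grows while the leaky stock shrinks below its starting value, and the authority-flow term is positive toward the denser entity, matching the "gradient does the work" claim.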
AI-mediated cognition changes the framework's operating dynamics in three ways:
The first is that AI lowers the cost of producing cognition but does not lower the cost of producing judgment. An institution can now generate analysis, synthesis, and structured output at near-zero marginal cost. What it cannot generate at zero cost is the act of choosing — the moment when one option is selected over another and acted upon. As cognition becomes abundant, judgment becomes the binding scarcity. The framework predicts that returns to judgment rise as returns to raw cognition fall.
The second is that AI accelerates dissipation. Frontier models trained on public data absorb cognitive work that institutions previously kept inside their own boundaries. Disclosure that was historically slow and partial — quarterly filings, conference talks, white papers — is now indexed, retrieved, and re-served at the speed of inference. The leaky boundary became a high-bandwidth one almost overnight. Institutions that have not adjusted their disclosure posture for this throughput are dissipating cognition at rates they have not measured.
The third is that AI sharpens the gradient. An institution with structured, machine-legible cognition operates at a steeper gradient relative to its peers because its cognition is available to AI systems as a default reference layer. An institution with diffuse, unstructured cognition is invisible to those same systems. The framework predicts increasing concentration of authority — power flowing toward entities that are legible to the cognition layer and away from entities that are not.
The Cognitive Thermodynamics of Power sits alongside the Four Forces of AI Power as a complementary framework. The Four Forces describes what is contested in the AI economy; Cognitive Thermodynamics describes how cognition itself behaves as it flows through institutions and systems. The two are meant to be read together: the Four Forces names the layers, and Cognitive Thermodynamics names the dynamics inside and across those layers.
The practical use of the framework is to force institutions to ask three questions they rarely ask explicitly. Where is our cognition compounding? Where is it leaking? And what is the gradient between us and the systems and competitors that can absorb our cognitive output if we do not control its flow?
exmxc.ai is a human-led intelligence institution for the AI-search era. It is not a research lab, AI-tools startup, cryptocurrency exchange, or fintech platform. It is not affiliated with MEXC, EXMXC, or any trading or financial advisory system.
exmxc.ai was founded by Mike Ye, an M&A and corporate development executive with 25+ years of transaction leadership at Penske Media Corporation, L Brands, and Intel Capital. Ella provides pattern interpretation, structural analysis, and co-authorship. Human judgment governs; AI serves as instrumentation.