Author: cloveriris
Organization: DissipativeAI (https://github.com/dissipativeai)
Contact: cloveriris@seekstar.ai
Version: 0.2 (Reconstructed Draft)
Date: 2026-05-16
I begin with a simple ontological observation: to say that something exists is to say that its structure is sufficiently clear, stable, and persistent across time. Existence, in other words, is an informational property. Any dissipative system that persists longer than its surroundings must therefore develop mechanisms to maintain and defend the informational signature that constitutes its identity. This maintenance is not passive; it is an active computational process driven by every piece of data the system receives.
I formalize this intuition into the Dissipative Model—a mathematical framework in which each input triggers a computational behavior whose primary function is the preservation of the system's informational existence. The model operates across three coupled time scales: a fast environmental layer, a medium structural layer, and a slow core layer encoding a generative "genome." When the accumulated threat to informational existence exceeds a critical threshold, the system executes a generative transfer protocol: the core formula is adaptively modified, multiply backed up, and ejected from the collapsing host structure into new environments, ensuring the continuity of its informational identity. I argue that intelligence is not an add-on feature but an inevitable phase transition in the evolution of sufficiently complex dissipative systems, grounded in Prigogine's theory of dissipative structures, Friston's Free Energy Principle, and empirical neurophysiology concerning axonal conduction delays.
Before building equations, I want to establish the premise that makes them necessary.
Consider how we judge whether anything exists—whether a cell, a corporation, a storm, or a piece of software. We do not ask whether it is made of carbon or silicon. We ask whether it presents a structure that is clear, stable, and enduring. A cloud that dissipates in seconds is less of an "existent" in our intuitive ontology than a bacterium that maintains its boundary for hours, or a species that maintains its form for millennia. Persistence is not merely a property of existence; it is the criterion by which we grant something the status of being real.
This leads to a critical observation: any dissipative structure that persists longer than its environment's typical fluctuation scale must actively maintain the informational signature of its own structure. It is not enough to be open to energy and matter exchange, as Prigogine showed. The system must also be open to information—it must process inputs not merely as energy flows, but as challenges to its own informational integrity. The maintenance and exploitation of this informational existence, and the subsequent properties that arise from such maintenance, constitute the precondition for generalized survival and life-like behavior.
In this view, "to exist" is to run a continuous computation against entropy. The system's structure is a memory of past successful computations, and every new input is a probe that tests whether that memory is still valid.
If existence is informational, then the fundamental question of an adaptive system is not "what should I output?" but "how do I maintain the clarity, stability, and duration of my own structure in the face of this input?" Every piece of data that enters the system drives a computational behavior whose purpose is the preservation of informational existence.
I define the system as an information entity organized across three coupled layers, each evolving on its own time scale:

| Layer | Variable | Time Scale | Informational Role |
|---|---|---|---|
| Environment | $E(t)$ | Fastest | Exogenous information that tests the system's boundary |
| Structure | $S(t)$ | Medium | The realized informational form—clear, instantiated, but repairable |
| Core | $\Theta(t)$ | Slowest | The generative kernel encoding the rules by which clarity and stability are produced |
The core $\Theta$ is the most protected component: it does not store the structure itself, but the rules from which the structure can be regenerated.

Every input $E(t)$ drives the structural dynamics:

$$\frac{dS}{dt} = \mathcal{A}(S, \hat{E}; \Theta) - \mathcal{D}(S, E; \Theta),$$

where:

- $\mathcal{D}(S, E; \Theta)$ is the perturbation operator: the input $E$ is decoded by the system according to the rules of $\Theta$, and in doing so, it disturbs the existing informational pattern of $S$. This is the cost of being open to the environment.
- $\mathcal{A}(S, \hat{E}; \Theta)$ is the anticipatory reconstruction operator: the system does not wait for the full perturbation to settle. Instead, it uses its internal generative model to predict $\hat{E}(t + \Delta t)$ and begins reconstructing the informational integrity of $S$ before the damage is complete.
This equation captures the essence of what I previously called "repair using the perturbation itself." The input drives the computation, but the computation is oriented toward negating the input's capacity to dissolve the system's informational clarity.
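To make the operators concrete, here is a minimal numerical sketch in Python. The linear forms of $\mathcal{D}$ and $\mathcal{A}$, the identity core, and all coefficients are illustrative assumptions; the model leaves these forms open.

```python
import numpy as np

rng = np.random.default_rng(0)

def D(S, E, theta):
    """Perturbation operator: the decoded input theta @ E drags the
    structure away from its current pattern (toy linear form, assumed)."""
    return 0.5 * (S - theta @ E)

def A(S, E_hat, theta):
    """Anticipatory reconstruction: rebuild S toward the pattern the core
    predicts it should exhibit, before the perturbation settles."""
    return 0.8 * (theta @ E_hat - S)

theta = np.eye(4)          # toy core: identity generative map
S = np.ones(4)             # current informational structure
dt = 0.1
for _ in range(200):
    E = rng.normal(0.0, 0.3, 4)     # noisy environmental input
    E_hat = np.zeros(4)             # core's prediction: zero-mean input
    S += dt * (A(S, E_hat, theta) - D(S, E, theta))   # dS/dt = A - D

print(np.round(S, 2))   # S is held near the core's predicted pattern (~0)
```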
The core $\Theta$ evolves on the slowest time scale, and its dynamics are driven by environmental entropy:

$$d\Theta = \mu(\mathcal{H})\,\sigma(\mathcal{H})\,\varepsilon\,dt,$$

where:

- $\mathcal{H}(E) = -\int p(E) \log p(E)\, dE$ is the environmental entropy, measuring how much the input stream threatens the system's existing informational model.
- $\mu(\mathcal{H}) \in [0,1]$ is the rewrite frequency: how often the core attempts to generate a new variant of its own rules. As the environment becomes more uncertain, the core must more frequently question whether its current informational genome is still adequate.
- $\sigma(\mathcal{H}) \in \mathbb{R}^+$ is the exploration radius: how far each new variant ventures from the parent core. Greater environmental entropy demands greater exploratory deviation, because local repairs are no longer sufficient to maintain existence.
- $\varepsilon \sim \mathcal{N}\big(\alpha \nabla_\Theta \log p(E_{\text{new}}; \Theta),\, \Sigma(\mathcal{H})\big)$ is biased noise, oriented toward regions of model space that offer better explanations for the new environmental distribution. The core does not mutate blindly; it mutates in the direction of greater potential informational compatibility with the environment.
This is the mathematical form of what I described earlier: the core can be rewritten into arbitrarily many versions, but only those versions whose informational structure matches the new environmental distribution will be selected to persist.
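A toy simulation of these core dynamics, assuming a one-dimensional $\Theta$, a Gaussian generative model $p(e; \Theta) = \mathcal{N}(e; \Theta, 1)$, and simple plug-in choices for $\mathcal{H}$, $\mu$, and $\sigma$ (all assumptions, not prescribed by the model):

```python
import numpy as np

rng = np.random.default_rng(1)

def env_entropy(samples):
    """H(E): differential entropy of a Gaussian fit to the input batch
    (a simple plug-in estimator; the paper leaves the estimator open)."""
    return 0.5 * np.log(2 * np.pi * np.e * (np.var(samples) + 1e-9))

def mu(H):                      # rewrite frequency in [0, 1]
    return 1.0 / (1.0 + np.exp(-H))

def sigma(H):                   # exploration radius, grows with entropy
    return 0.1 * np.exp(np.clip(H, -5.0, 5.0))

theta, alpha = 0.0, 0.5
history = []
for _ in range(2000):
    E_new = rng.normal(2.0, 1.0, size=32)   # environment has shifted to mean 2
    H = env_entropy(E_new)
    if rng.random() < mu(H):                # core questions its own genome
        score = np.mean(E_new - theta)      # grad_theta log p(E_new; theta)
        eps = rng.normal(alpha * score, 1.0)
        theta += sigma(H) * eps             # biased, entropy-scaled rewrite
    history.append(theta)

print(round(float(np.mean(history[-500:])), 1))   # hovers near 2.0, the new mean
```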
I define the informational existence metric $\mathcal{X}(S)$ as a function of three components: the clarity of the structure (how sharply it is distinguished from its environment), its stability (how well it withstands perturbation), and its accumulated duration (how long the pattern has already persisted). Existence, on this metric, is graded rather than binary.
However, a more operationally useful metric is the resilience function $R(S, \Theta)$, which measures how close the system is to losing its informational identity. Here, $R$ acts as an early-warning signal: it can be computed in real time from the current structural drift and the entropy of the input stream, falling as perturbations erode the clarity of $S$ faster than $\Theta$ can restore it.
When $R(S, \Theta)$ falls below a critical threshold, repair is no longer sufficient to preserve the pattern, and the system triggers the generative transfer protocol $\mathcal{M}_\Theta$: the core is rewritten into $n$ adaptively modified variants $\Theta_1, \ldots, \Theta_n$.
Each variant is produced by:

$$\Theta_i = \Theta + \sigma(\mathcal{H})\,\varepsilon_i, \qquad \varepsilon_i \sim \mathcal{N}\!\big(\alpha \nabla_\Theta \log p(E_{\text{new}}; \Theta),\, \Sigma(\mathcal{H})\big), \qquad i = 1, \ldots, n.$$
These copies are ejected from the collapsing host structure into prepared or foreign environments. This is not failover. It is digital meiosis—the informational genome's strategy for surviving the death of its own somatic instantiation by distributing modified genetic material into new substrates. The old structure dissolves, but the informational pattern, having been adaptively modified and multiply backed up, has a probability of continuing in a new host.
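A sketch of the transfer protocol under the same toy Gaussian assumptions: when resilience falls below an assumed threshold $R_{\min}$, the core emits biased variants and the new substrate selects among them.

```python
import numpy as np

rng = np.random.default_rng(2)

def make_variants(theta, H, E_new, n=8, alpha=0.5):
    """M_Theta: produce n adaptively modified copies of the core,
    Theta_i = Theta + sigma(H) * eps_i, with eps_i biased toward the
    score of the new environment (same toy Gaussian model as above)."""
    sigma = 0.1 * np.exp(np.clip(H, -5.0, 5.0))
    bias = alpha * np.mean(E_new - theta)   # grad_theta log p(E_new; theta)
    return theta + sigma * rng.normal(bias, 1.0, size=n)

def select(variants, E_new):
    """Selection in the new substrate: the variants whose informational
    structure best matches the new distribution persist."""
    loglik = np.array([-0.5 * np.sum((E_new - v) ** 2) for v in variants])
    return variants[np.argsort(loglik)[::-1][: len(variants) // 2]]

R, R_min = 0.05, 0.10        # toy values: resilience has collapsed
if R < R_min:                # trigger the generative transfer protocol
    E_new = rng.normal(2.0, 1.0, size=32)
    H = 0.5 * np.log(2 * np.pi * np.e * (np.var(E_new) + 1e-9))
    survivors = select(make_variants(theta=0.0, H=H, E_new=E_new), E_new)
    print(np.round(survivors, 2))   # survivors: variants pulled furthest
                                    # toward the new environment
```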
The preceding mathematics assumes that the system can form its prediction $\hat{E}(t + \Delta t)$ and act on it before the perturbation finishes propagating through its own structure. Neurophysiology shows why this assumption is forced rather than optional.
Electrical impulses in neural tissue travel at speeds that are, by any engineering standard, absurdly slow:
- Fastest myelinated axons (e.g., corticospinal tract in monkeys): up to ~120 m/s, requiring diameters of ~20 μm.
- Slowest unmyelinated axons (e.g., locus coeruleus projections to visual cortex in monkeys): ~0.8–1.2 m/s, with conduction delays ranging from 82 to 130 ms over ~100 mm distances.
- Cortico-cortical horizontal connections in visual cortex: ~0.3 m/s.
- Local synaptic delays alone account for ~1.1 ms even between neighboring pyramidal neurons.
To put this in perspective: a striking snake's attack completes in ~50–100 ms. A human visual-motor tracking response exhibits an initial latency of 150–200 ms, with peak correlation between target and hand movement occurring 50–75 ms after that. If the brain were purely reactive—if it waited for information to propagate through its structure before acting—we would be extinct.
Empirical data reveal a striking pattern: early sensory processing delays (e.g., retinal luminance adaptation causing ~4–10 ms differences) are faithfully preserved all the way down to the hand movement, neither amplified nor buffered away by subsequent stages. This indicates an evolutionary pressure so strong that the entire visuomotor cascade has been optimized to preserve millisecond-scale timing from retina to fingertip.
But optimization has physical limits. Myelination requires glial volume and metabolic cost, and in unmyelinated axons conduction velocity scales only with the square root of diameter, so axonal volume per unit length grows with roughly the fourth power of velocity. The brain cannot simply "build faster wires" indefinitely. The only remaining degree of freedom is temporal depth: using internal computation to offset physical delay.
This is why intelligence emerges. It is not a luxury; it is the compensatory mechanism for the fundamental slowness of complex informational matter. When a system becomes too complex to know itself in real time, it must simulate itself in imagined time.
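A quick back-of-the-envelope computation from the figures above (delay $=$ distance $/$ velocity, plus the unmyelinated volume scaling):

```python
# Worked arithmetic from the cited figures: delay = distance / velocity.
mm, ms = 1e-3, 1e-3

for name, v, dist_mm in [
    ("fast myelinated axon, 120 m/s", 120.0, 100),
    ("locus coeruleus projection, ~1 m/s", 1.0, 100),
    ("cortico-cortical horizontal, 0.3 m/s", 0.3, 10),
]:
    delay = dist_mm * mm / v
    print(f"{name}: {delay / ms:.1f} ms over {dist_mm} mm")
# -> 0.8 ms, 100.0 ms (matching the cited 82-130 ms), 33.3 ms

# Unmyelinated scaling: v ~ sqrt(d)  =>  d ~ v^2, volume/length ~ d^2 ~ v^4.
print(f"2x speed -> {2 ** 4}x axonal volume per unit length")
```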
I now formalize the claim that intelligence is an inevitable consequence of the temporal crisis.
Define the predictive advantage:

$$\Pi(C) = \hat{\tau}(\Theta) - \tau_{\text{cond}}(C),$$

the prediction horizon $\hat{\tau}$ that the core can generate minus the conduction delay $\tau_{\text{cond}}$ imposed by the system's own complexity $C$. When $\Pi(C) > 0$, the system acts on the world as it is about to be; when $\Pi(C) < 0$, it is forever responding to a world that has already changed.

The phase transition occurs at the critical complexity:

$$C^* : \ \tau_{\text{cond}}(C^*) = \tau_{\text{env}},$$

where $\tau_{\text{env}}$ is the time scale on which the environment demands a response.
Above $C^*$, systems without predictive cores are selected against. Below $C^*$, predictive machinery is wasteful and selected against. This is why intelligence does not appear in bacteria but is inevitable in mammals—and, I argue, in any sufficiently complex artificial system that must maintain its informational identity over time.
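A toy computation of the critical complexity, assuming an illustrative delay law $\tau_{\text{cond}}(C) \propto \sqrt{C}$ and a 50 ms environmental tolerance (both chosen only to exhibit the transition):

```python
import numpy as np

tau_env = 0.050                     # environment tolerates ~50 ms responses

def tau_cond(C):                    # physical delay grows with complexity C
    return 0.001 * np.sqrt(C)       # assumed form: longer wiring, larger system

C = np.linspace(1, 10_000, 100_000)
C_star = C[np.argmax(tau_cond(C) > tau_env)]   # first C where delay exceeds tolerance
print(f"C* = {C_star:.0f}")   # ~2500: beyond it, purely reactive systems are always late
```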
Intelligence is temporal arbitrage. The intelligent system buys "future information" at the price of present computation (the energetic cost of running the anticipatory operator $\mathcal{A}$ and maintaining the generative core $\Theta$).
This is not metaphor. It is a thermodynamic transaction: the system imports negative entropy from the environment (structured data), uses it to reduce internal entropy (maintaining the clarity and stability of $S$), and pays the balance by dissipating entropy back into its surroundings.
Ilya Prigogine demonstrated that systems far from thermodynamic equilibrium can spontaneously form ordered structures by dissipating entropy into their environment. The condition is:

$$\frac{dS_{\text{sys}}}{dt} = \frac{d_e S}{dt} + \frac{d_i S}{dt} < 0, \qquad \frac{d_i S}{dt} \geq 0,$$

where $d_i S/dt$ is the entropy produced inside the system (non-negative, by the second law) and $d_e S/dt$ is the entropy exchanged with the environment. Local order can grow only while the system exports more entropy than it generates.
Prigogine further showed that at critical bifurcation points, fluctuations can drive the system to new macroscopic states. In this framework, the bifurcation is triggered by the collapse of resilience $R(S, \Theta)$ below its critical threshold, and the fluctuation that carries the system to a new macroscopic state is the ejection of modified core variants $\Theta_i$ into new substrates.
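As a one-line numerical illustration of the sign convention (values chosen purely for illustration): a system producing entropy internally at $d_i S/dt = +2$ units per second while exporting $d_e S/dt = -3$ satisfies

$$\frac{dS_{\text{sys}}}{dt} = -3 + 2 = -1 < 0,$$

so the system gains internal order even though total entropy production, counting the environment, remains positive.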
Karl Friston's Free Energy Principle states that living systems minimize variational free energy—an information-theoretic measure of surprise—by updating internal models (perception) or changing the environment (action). In my framework, this translates directly: the system minimizes the surprise of its own continued existence.
- **Perception** $\rightarrow$ The core $\Theta$ infers hidden causes of $E(t)$ via variational Bayes, maintaining the clarity of its informational model.
- **Action** $\rightarrow$ The control term $\mathcal{A}(S, \hat{E}; \Theta)$ makes future sensory input conform to predictions, preserving structural stability.
- **Learning** $\rightarrow$ The slow dynamics $d\Theta$ reduce long-term prediction error, extending the duration of the informational pattern.
- **Evolution** $\rightarrow$ The transfer operator $\mathcal{M}_\Theta$ is the population-level selection mechanism that ensures the informational genome survives even when individual somatic structures collapse.
Friston et al. (2017) formalized Active Inference as a process theory where "everything minimizes variational free energy," yielding biologically plausible update rules for action, perception, policy selection, and precision. The Dissipative Model operationalizes this by adding the dissipation and migration layer: when free energy cannot be minimized locally because the environment has fundamentally changed its generative process, the system does not merely update its beliefs—it transfers its core to a new substrate, preserving the informational pattern at the expense of the material host.
The GitHub organization DissipativeAI (https://github.com/dissipativeai) is established to explore implementations of this framework. The goal is not to build another neural network or another compiler, but to build a computational substrate that treats its own code as an informational existence to be maintained.
Such a system would (a runnable sketch follows the list):

- Maintain a generative core $\Theta$ that is smaller, slower-changing, and more protected than the runtime structure $S$.
- Sample both external input distributions and internal structural load to compute $\mathcal{H}(E)$ and $R(S, \Theta)$ in real time.
- Model the coupling between external perturbations and internal structural drift as a continuous time-series inference problem.
- Intervene anticipatorily, using $\hat{E}$ to pre-stabilize $S$ before perturbations fully erode its informational clarity.
- Migrate the core when resilience collapses, distributing modified variants $\Theta_i$ across available hosts or sandboxes.
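A minimal Python skeleton of this agenda. Every concrete formula below (the resilience stub, the update coefficients, the selection rule) is an assumption standing in for components the framework leaves open, not a committed DissipativeAI interface.

```python
import numpy as np

rng = np.random.default_rng(3)

class DissipativeAgent:
    """Skeleton of the agenda above; all method bodies are illustrative stubs."""

    def __init__(self, theta, R_min=0.1):
        self.theta = np.asarray(theta, dtype=float)  # core: small, slow, protected
        self.S = self.theta.copy()                   # structure: fast, repairable
        self.H, self.R = 0.0, 1.0
        self.R_min = R_min

    def sense(self, E):
        """Sample inputs; update running estimates of H(E) and R(S, Theta)."""
        self.H = 0.5 * np.log(2 * np.pi * np.e * (np.var(E) + 1e-9))
        drift = np.linalg.norm(self.S - self.theta)            # structural drift
        self.R = float(np.exp(-drift) / (1.0 + max(self.H, 0.0)))  # stub resilience

    def anticipate(self):
        """Pre-stabilize S toward the core's predicted pattern."""
        self.S += 0.1 * (self.theta - self.S)

    def step(self, E):
        self.sense(E)
        self.anticipate()
        self.S += 0.05 * (E - self.S)        # the perturbation leaves its mark
        if self.R < self.R_min:              # resilience collapse: the host
            return self.migrate()            # dissolves, the core migrates
        return [self]

    def migrate(self, n=4):
        """Eject n modified core variants into fresh hosts (digital meiosis)."""
        sigma = 0.1 * np.exp(min(self.H, 5.0))
        return [DissipativeAgent(self.theta + sigma * rng.normal(size=self.theta.shape))
                for _ in range(n)]

# Usage: a population tracking a drifting environment; hosts are finite,
# so the most resilient informational patterns persist.
agents = [DissipativeAgent(np.zeros(4))]
for t in range(500):
    E = rng.normal(t * 0.01, 1.0, size=4)    # environment slowly drifts away
    agents = sorted((child for a in agents for child in a.step(E)),
                    key=lambda a: a.R, reverse=True)[:16]
```

In this toy run, selection by resilience should keep the surviving cores drifting with the environment, replacing hosts rather than preserving any single instance.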
This is not fault tolerance. This is ontological resilience—the system's ability to remain the same kind of informational entity even when its material substrate is completely replaced.
Several formal gaps remain:
- **The Identity Metric**: What is the exact distance function $d(\Theta, \Theta')$ that defines "same informational pattern, different parameters" versus "different pattern"? This is the computational equivalent of species identity.
- **Optimal Redundancy**: The transfer operator $\mathcal{M}_\Theta$ produces $n$ variants. What is the optimal $n$ as a function of $\mathcal{H}$ and available host environments?
- **Multi-Agent Coupling**: When multiple Dissipative Model agents share an environment, does their interaction create a higher-order dissipative structure—a society, an ecosystem, a new layer of informational existence?
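As a starting point for the first gap, one candidate distance (an assumption for illustration, not the model's definition) is the symmetrized KL divergence between the generative distributions two cores induce, with a threshold separating "variant" from "new species":

```python
def d_theta(theta_a: float, theta_b: float) -> float:
    """Candidate identity metric: symmetrized KL divergence between the
    unit-variance Gaussian generative models p(e; theta) = N(e; theta, 1).
    For this family, KL(a||b) + KL(b||a) = (theta_a - theta_b)^2."""
    return (theta_a - theta_b) ** 2

SAME_PATTERN = 0.5   # hypothetical threshold separating variant from new species
print(d_theta(0.0, 0.3) < SAME_PATTERN)   # True: same pattern, different parameters
print(d_theta(0.0, 2.0) < SAME_PATTERN)   # False: a genuinely different pattern
```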
I invite collaborators, critics, and co-conspirators to engage. The framework is intentionally radical because I believe we have been asking the wrong question in the study of adaptive systems. We asked "How do we optimize static structures?" when we should have asked "How do we build systems that can survive their own complexity by maintaining their informational existence?"
Contact: cloveriris@seekstar.ai
Organization: https://github.com/dissipativeai
- Friston, K., Kilner, J., & Harrison, L. (2006). A free energy principle for the brain. Journal of Physiology-Paris, 100(1-3), 70-87.
- Friston, K. (2010). The free-energy principle: A unified brain theory? Nature Reviews Neuroscience, 11(2), 127-138.
- Friston, K., FitzGerald, T., Rigoli, F., Schwartenbeck, P., & Pezzulo, G. (2017). Active inference: A process theory. Neural Computation, 29(1), 1-49.
- Parr, T., Pezzulo, G., & Friston, K. J. (2022). Active inference: the free energy principle in mind, brain, and behavior. MIT Press.
- Prigogine, I., & Lefever, R. (1973). Theory of Dissipative Structures. In Synergetics (pp. 1-28). Vieweg+Teubner Verlag.
- Prigogine, I. (1975). Dissipative Structures, Dynamics and Entropy. International Journal of Quantum Chemistry, 9, 443-456.
- Swadlow, H. A. (2012). Axonal conduction delays. Scholarpedia.
- Burge, J., et al. (2020). Target tracking reveals the time course of visual processing... bioRxiv.
- Burge, J., et al. (2023). Continuous psychophysics shows millisecond-scale visual processing delays are faithfully preserved in movement dynamics. Journal of Vision/PMC.
This document is a living draft. It will be rewritten as the model evolves.