The Dissipative Model: An Informational-Existential Framework for Adaptive Computation

Author: cloveriris
Organization: DissipativeAI (https://github.com/dissipativeai)
Contact: cloveriris@seekstar.ai
Version: 0.2 (Reconstructed Draft)
Date: 2026-05-16

Abstract

I begin with a simple ontological observation: to say that something exists is to say that its structure is sufficiently clear, stable, and persistent across time. Existence, in other words, is an informational property. Any dissipative system that persists longer than its surroundings must therefore develop mechanisms to maintain and defend the informational signature that constitutes its identity. This maintenance is not passive; it is an active computational process driven by every piece of data the system receives.

I formalize this intuition into the Dissipative Model—a mathematical framework in which each input triggers a computational behavior whose primary function is the preservation of the system's informational existence. The model operates across three coupled time scales: a fast environmental layer, a medium structural layer, and a slow core layer encoding a generative "genome." When the accumulated threat to informational existence exceeds a critical threshold, the system executes a generative transfer protocol: the core formula is adaptively modified, multiply backed up, and ejected from the collapsing host structure into new environments, ensuring the continuity of its informational identity. I argue that intelligence is not an add-on feature but an inevitable phase transition in the evolution of sufficiently complex dissipative systems, grounded in Prigogine's theory of dissipative structures, Friston's Free Energy Principle, and empirical neurophysiology concerning axonal conduction delays.


1. The Ontology of Informational Existence

Before building equations, I want to establish the premise that makes them necessary.

Consider how we judge whether anything exists—whether a cell, a corporation, a storm, or a piece of software. We do not ask whether it is made of carbon or silicon. We ask whether it presents a structure that is clear, stable, and enduring. A cloud that dissipates in seconds is less of an "existent" in our intuitive ontology than a bacterium that maintains its boundary for hours, or a species that maintains its form for millennia. Persistence is not merely a property of existence; it is the criterion by which we grant something the status of being real.

This leads to a critical observation: any dissipative structure that persists longer than its environment's typical fluctuation scale must actively maintain the informational signature of its own structure. It is not enough to be open to energy and matter exchange, as Prigogine showed. The system must also be open to information—it must process inputs not merely as energy flows, but as challenges to its own informational integrity. The maintenance and exploitation of this informational existence, and the subsequent properties that arise from such maintenance, constitute the precondition for generalized survival and life-like behavior.

In this view, "to exist" is to run a continuous computation against entropy. The system's structure is a memory of past successful computations, and every new input is a probe that tests whether that memory is still valid.


2. The Mathematical Framework: An Informational Existence-Driven Equation

If existence is informational, then the fundamental question of an adaptive system is not "what should I output?" but "how do I maintain the clarity, stability, and duration of my own structure in the face of this input?" Every piece of data that enters the system drives a computational behavior whose purpose is the preservation of informational existence.

2.1 The State Space as Informational Topology

I define the system as an information entity $\mathcal{X}$ operating across three layers, each representing a different mode of informational existence:

| Layer | Variable | Time Scale | Informational Role |
| --- | --- | --- | --- |
| Environment | $E(t) \in \mathcal{E}$ | Fastest | Exogenous information that tests the system's boundary |
| Structure | $S(t) \in \mathcal{S}$ | Medium | The realized informational form—clear, instantiated, but repairable |
| Core | $\Theta \in \mathcal{T}$ | Slowest | The generative kernel encoding the rules by which clarity and stability are produced |

The core $\Theta$ is the system's "informational genome." It does not contain the structure itself; it contains the recipe for reconstructing the structure's informational signature. The structure $S$ is the living instantiation—clearly patterned, locally stable, but expendable. The environment $E$ is the stream of perturbations that continually threatens to erode the clarity of $S$.
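The three-layer state space can be sketched as a plain container. This is a minimal illustration only: the field names, array types, and the time-scale constants in `tau` are assumptions chosen for exposition, not part of any published DissipativeAI interface.

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class DissipativeState:
    """Toy container for the three coupled layers of Section 2.1."""
    E: np.ndarray      # environment: fastest layer, exogenous input stream
    S: np.ndarray      # structure: medium layer, the realized informational form
    theta: np.ndarray  # core: slowest layer, the generative "genome"
    # Illustrative characteristic time scales (arbitrary units), ordered
    # fastest-to-slowest as the model requires.
    tau: dict = field(default_factory=lambda: {"E": 1.0, "S": 100.0, "theta": 10_000.0})
```

The separation of time scales is carried here only by the `tau` constants; a fuller implementation would enforce it in the update rules themselves.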

2.2 The Input-Driven Computational Behavior Equation

Every input $E(t)$ triggers a computational behavior $\mathcal{C}_E$ that acts upon the structure. I formalize this as an informational existence maintenance equation:

$$\frac{dS}{dt} = \mathcal{C}_E(S, \Theta) = \underbrace{\mathcal{D}(S, E; \Theta)}_{\text{decoding / perturbation}} + \underbrace{\mathcal{A}(S, \hat{E}; \Theta)}_{\text{anticipatory reconstruction}}$$

where:

  • $\mathcal{D}(S, E; \Theta)$ is the perturbation operator: the input $E$ is decoded by the system according to the rules of $\Theta$, and in doing so, it disturbs the existing informational pattern of $S$. This is the cost of being open to the environment.
  • $\mathcal{A}(S, \hat{E}; \Theta)$ is the anticipatory reconstruction operator: the system does not wait for the full perturbation to settle. Instead, it uses its internal generative model to predict $\hat{E}(t + \Delta t)$ and begins reconstructing the informational integrity of $S$ before the damage is complete.

This equation captures the essence of what I previously called "repair using the perturbation itself." The input drives the computation, but the computation is oriented toward negating the input's capacity to dissolve the system's informational clarity.
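A discretized sketch of this equation follows. Both operators are toy linear stand-ins for the abstract $\mathcal{D}$ and $\mathcal{A}$, and `predict` is an assumed callable standing in for the core's generative forecast $\hat{E}$; none of this is a prescribed implementation.

```python
import numpy as np

def step_structure(S, E, theta, predict, dt=0.01):
    """One Euler step of dS/dt = D(S, E; theta) + A(S, E_hat; theta).

    D models the perturbation: the decoded input disturbs the current
    pattern of S. A models anticipatory reconstruction: S is pulled
    toward the state consistent with the *predicted* next input E_hat,
    before the perturbation has fully settled.
    """
    E_hat = predict(E, theta)       # internal forecast of the next input
    D = -0.5 * (S - E)              # toy decoding/perturbation operator
    A = -0.5 * (S - E_hat)          # toy anticipatory reconstruction operator
    return S + dt * (D + A)
```

With a perfect predictor (`E_hat == E`) the two operators coincide and the step reduces to plain relaxation toward the input; the interesting regime is when they disagree.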

2.3 The Slow Dynamics of the Informational Genome

The core $\Theta$ does not change with every input. It evolves on a slower time scale, because it encodes the strategy for maintaining existence, not the tactical response. Its evolution is driven by the accumulated entropy of the environment:

$$d\Theta = \mu(\mathcal{H}) \cdot \sigma(\mathcal{H}) \cdot \varepsilon \, dt$$

where:

  • $\mathcal{H}(E) = -\int p(E) \log p(E) \, dE$ is the environmental entropy, measuring how much the input stream threatens the system's existing informational model.
  • $\mu(\mathcal{H}) \in [0,1]$ is the rewrite frequency: how often the core attempts to generate a new variant of its own rules. As the environment becomes more uncertain, the core must more frequently question whether its current informational genome is still adequate.
  • $\sigma(\mathcal{H}) \in \mathbb{R}^+$ is the exploration radius: how far each new variant ventures from the parent core. Greater environmental entropy demands greater exploratory deviation, because local repairs are no longer sufficient to maintain existence.
  • $\varepsilon \sim \mathcal{N}(\alpha \nabla_\Theta \log p(E_{\text{new}}; \Theta), \Sigma(\mathcal{H}))$ is biased noise, oriented toward regions of model space that offer better explanations for the new environmental distribution. The core does not mutate blindly; it mutates in the direction of greater potential informational compatibility with the environment.

This is the mathematical form of what I described earlier: the core can be rewritten into arbitrarily many versions, but only those versions whose informational structure matches the new environmental distribution will be selected to persist.
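The slow core dynamics can be sketched as follows. The particular monotone forms chosen for $\mu(\mathcal{H})$ and $\sigma(\mathcal{H})$ are illustrative assumptions, and `score` is an assumed callable standing in for $\nabla_\Theta \log p(E_{\text{new}}; \Theta)$; the document specifies only their qualitative behavior.

```python
import numpy as np

def entropy(p):
    """Shannon entropy of a discrete input distribution p (assumed normalized)."""
    p = np.clip(p, 1e-12, 1.0)
    return float(-np.sum(p * np.log(p)))

def step_core(theta, p_env, score, rng, alpha=0.1, dt=1.0):
    """One step of d_Theta = mu(H) * sigma(H) * eps * dt (toy instantiation).

    mu grows with entropy (the core questions itself more often under
    uncertainty); sigma grows with entropy (variants venture further);
    eps is noise biased along the score toward better explanations of
    the new environmental distribution.
    """
    H = entropy(p_env)
    mu = 1.0 - np.exp(-H)       # rewrite frequency in [0, 1), rising with H
    sigma = 1.0 + H             # exploration radius, rising with H
    eps = alpha * score(theta) + rng.normal(0.0, np.sqrt(H + 1e-9), size=theta.shape)
    return theta + mu * sigma * eps * dt
```

Note that when the environment is fully predictable ($\mathcal{H} \to 0$) the rewrite frequency vanishes and the core is effectively frozen, as the model intends.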

2.4 The Informational Existence Metric and Generative Transfer

I define the informational existence metric as a function of clarity, stability, and accumulated duration:

$$\mathcal{I}(S, \Theta; t) = \text{Clarity}(S) \times \text{Stability}(S, \Theta) \times \exp\left(-\int_0^t \frac{\mathcal{H}(E(\tau))}{\tau_p(\tau)} \, d\tau\right)$$

where $\tau_p$ is the characteristic persistence time of the core's current variant. The exponential term captures the erosion of duration under sustained environmental uncertainty.

However, a more operationally useful metric is the resilience function, which measures how close the system is to losing its informational identity:

$$R(S, \Theta; t) = \exp\left(-\int_0^t \mathcal{H}(E(\tau)) \cdot \mathbb{I}[t_i(\tau) > t_p(\tau)] \, d\tau\right)$$

Here, $t_i$ is the internal information propagation time, and $t_p$ is the environmental physical time. The indicator $\mathbb{I}[t_i > t_p]$ activates when the system is physically too slow to know its own state before the environment changes again. This is the temporal debt of informational existence: the system owes the environment a response, but it cannot pay because its internal signals have not yet arrived.
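A discretized version of the resilience integral is straightforward; the function below is a direct sketch of the formula above, assuming uniform time steps.

```python
import numpy as np

def resilience(H_series, t_i_series, t_p_series, dt=1.0):
    """R(t) = exp(-integral of H(E) * 1[t_i > t_p] dtau), discretized.

    Environmental entropy erodes resilience only while the system is in
    temporal debt, i.e. while its internal propagation time t_i exceeds
    the environmental physical time scale t_p.
    """
    H = np.asarray(H_series, dtype=float)
    debt = np.asarray(t_i_series) > np.asarray(t_p_series)  # indicator term
    return float(np.exp(-np.sum(H * debt) * dt))
```

As long as the system stays faster than its environment the indicator never fires and $R$ stays at 1, regardless of how entropic the input stream is.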

When $R < \delta$, the system triggers the generative transfer protocol:

$$\mathcal{M}_\Theta: \Theta \mapsto \{\Theta_1, \Theta_2, \dots, \Theta_n\}$$

Each variant is produced by:

$$\Theta_i = \Theta + \eta_i \cdot \nabla_\Theta \log p(E_{\text{new}}; \Theta) + \mathcal{N}(0, \sigma^2)$$

These copies are ejected from the collapsing host structure into prepared or foreign environments. This is not failover. It is digital meiosis—the informational genome's strategy for surviving the death of its own somatic instantiation by distributing modified genetic material into new substrates. The old structure dissolves, but the informational pattern, having been adaptively modified and multiply backed up, has a probability of continuing in a new host.
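The variant-generation rule can be sketched directly from the formula for $\Theta_i$. Here `score` is again an assumed stand-in for $\nabla_\Theta \log p(E_{\text{new}}; \Theta)$, and the step sizes are arbitrary illustrative defaults.

```python
import numpy as np

def generative_transfer(theta, score, n, eta=0.05, sigma=0.1, rng=None):
    """Produce n core variants: Theta_i = Theta + eta * grad log p + N(0, sigma^2).

    Each variant shares the biased drift toward the new environmental
    distribution but carries its own Gaussian deviation, so the ejected
    population spans a neighborhood of plausible informational genomes.
    """
    rng = rng or np.random.default_rng(0)
    g = score(theta)  # direction of greater compatibility with E_new
    return [theta + eta * g + rng.normal(0.0, sigma, size=theta.shape)
            for _ in range(n)]
```

Selection among the $n$ variants is left to the receiving environments, as in the model: only variants whose structure matches the new distribution persist.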


3. The Temporal Crisis: Why Information is Always Too Slow

The preceding mathematics assumes that $t_i$ can exceed $t_p$. I now show that this is not a special case but the universal condition of complex systems, using neurophysiological evidence.

3.1 The Glacial Speed of Biological Information

Electrical impulses in neural tissue travel at speeds that are, by any engineering standard, absurdly slow:

  • Fastest myelinated axons (e.g., corticospinal tract in monkeys): up to ~120 m/s, requiring diameters of ~20 μm.
  • Slowest unmyelinated axons (e.g., locus coeruleus projections to visual cortex in monkeys): ~0.8–1.2 m/s, with conduction delays ranging from 82 to 130 ms over ~100 mm distances.
  • Cortico-cortical horizontal connections in visual cortex: ~0.3 m/s.
  • Local synaptic delays alone account for ~1.1 ms even between neighboring pyramidal neurons.

To put this in perspective: a striking snake's attack completes in ~50–100 ms. A human visual-motor tracking response exhibits an initial latency of 150–200 ms, with peak correlation between target and hand movement occurring 50–75 ms after that. If the brain were purely reactive—if it waited for information to propagate through its structure before acting—we would be extinct.
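The delay figures above follow from simple arithmetic, which a two-line helper makes explicit; conveniently, millimeters divided by meters-per-second come out directly in milliseconds.

```python
def conduction_delay_ms(distance_mm: float, velocity_m_per_s: float) -> float:
    """Delay in milliseconds to traverse `distance_mm` of axon.

    mm / (m/s) = (1e-3 m) / (m/s) = 1e-3 s = 1 ms, so no unit
    conversion factor is needed.
    """
    return distance_mm / velocity_m_per_s

slow = conduction_delay_ms(100.0, 1.0)    # ~100 mm unmyelinated projection
fast = conduction_delay_ms(100.0, 120.0)  # same distance, fastest myelinated axon
```

At ~1 m/s the 100 mm projection takes 100 ms, the full duration of a snake strike, while the fastest myelinated fiber covers the same distance in under a millisecond: a two-orders-of-magnitude spread inside one brain.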

3.2 The Evolutionary Necessity of Prediction

Empirical data reveal a striking pattern: early sensory processing delays (e.g., retinal luminance adaptation causing ~4–10 ms differences) are faithfully preserved all the way down to the hand movement. There is no magnification or buffering across subsequent stages. This indicates an evolutionary pressure so strong that the entire visuomotor cascade has been optimized to preserve millisecond-scale timing from retina to fingertip.

But optimization has physical limits. Myelination requires glial volume and metabolic cost, and in unmyelinated fibers conduction velocity scales only with the square root of axon diameter, so diameter grows with the square of the desired velocity and axonal volume with its fourth power. The brain cannot simply "build faster wires" indefinitely. The only remaining degree of freedom is temporal depth: using internal computation to offset physical delay.

This is why intelligence emerges. It is not a luxury; it is the compensatory mechanism for the fundamental slowness of complex informational matter. When a system becomes too complex to know itself in real time, it must simulate itself in imagined time.


4. Intelligence as a Phase Transition in Informational Existence

I now formalize the claim that intelligence is an inevitable consequence of the temporal crisis.

4.1 The Critical Complexity Threshold

Define the predictive advantage: $$\Delta V = \frac{t_i - t_c}{t_p}$$ where $t_i$ is the internal information propagation time, $t_c$ is the time cost of computing a prediction, and $t_p$ is the environmental physical time.

When $\Delta V < 0$, reactive strategies suffice; the system can maintain its informational existence by direct response. When $\Delta V > 0$, prediction becomes advantageous. But prediction has a cost: maintaining and updating $\Theta$ consumes metabolic or computational resources. The system will only invest in prediction when the expected survival gain exceeds the cost of modeling.

The phase transition occurs at: $$C^* = \inf \left\{ C \in \mathbb{R}^+ \mid \mathbb{E}[\text{informational persistence} \mid \Theta] \cdot P(\Theta \text{ accurate}) > \mathbb{E}[\text{informational persistence} \mid \text{reactive}] \right\}$$

Above $C^*$, systems without predictive cores are selected against. Below $C^*$, predictive machinery is wasteful and selected against. This is why intelligence does not appear in bacteria but is inevitable in mammals—and, I argue, in any sufficiently complex artificial system that must maintain its informational identity over time.
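The decision rule implied by $\Delta V$ reduces to a one-line comparison; the numeric example below is illustrative only, with delay figures loosely borrowed from the neurophysiological ranges in Section 3.

```python
def should_predict(t_i_ms: float, t_c_ms: float, t_p_ms: float) -> bool:
    """Reactive when Delta_V < 0; predictive when Delta_V > 0 (Section 4.1).

    Prediction pays off once the internal propagation delay t_i exceeds
    the cost t_c of computing a forecast, relative to the environmental
    time scale t_p.
    """
    return (t_i_ms - t_c_ms) / t_p_ms > 0.0

# e.g. 100 ms internal delay, 20 ms prediction cost, 80 ms environment scale:
# Delta_V = 1.0 > 0, so the predictive core pays for itself.
```

A bacterium-scale system ($t_i$ of microseconds, any nonzero $t_c$) lands firmly below the threshold, matching the claim that predictive machinery is wasteful there.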

4.2 Temporal Arbitrage as the Essence of Intelligent Maintenance

Intelligence is temporal arbitrage. The intelligent system buys "future information" at the price of present computation ($t_c$), and sells it at the higher price of survival time ($t_i - t_p$). The profit margin is the system's continued informational existence.

This is not metaphor. It is a thermodynamic transaction: the system imports negative entropy from the environment (structured data), uses it to reduce internal entropy (maintain the clarity and stability of $S$), and exports positive entropy (failed predictions, discarded structures). When direct reaction is too slow, the predictive model is simply the most efficient entropy pump available for maintaining informational existence.


5. Theoretical Grounding: From Dissipative Structures to Free Energy

5.1 Prigogine: Existence Through Dissipation

Ilya Prigogine demonstrated that systems far from thermodynamic equilibrium can spontaneously form ordered structures by dissipating entropy into their environment. The condition is: $$\dot{S} = \sigma + \Phi > 0$$

where $\sigma$ is internal entropy production and $\Phi$ is entropy flow to the environment. In the Dissipative Model, the runtime structure $S$ is the dissipative structure, the core $\Theta$ is the self-organizing kernel maintaining the far-from-equilibrium condition, and the "waste" expelled during core migration (old structures, failed variants) is the entropy $\Phi$ exported to the environment.

Prigogine further showed that at critical bifurcation points, fluctuations can drive the system to new macroscopic states. In this framework, the bifurcation is triggered by $\mathcal{H}(E)$ crossing a threshold, and the new state is a modified core $\Theta'$—a new informational genome adapted to a new environmental regime.

5.2 Friston: Minimizing Surprise to Maximize Existence

Karl Friston's Free Energy Principle states that living systems minimize variational free energy—an information-theoretic measure of surprise—by updating internal models (perception) or changing the environment (action). In my framework, this translates directly: the system minimizes the surprise of its own continued existence.

  • Perception $\rightarrow$ The core $\Theta$ infers hidden causes of $E(t)$ via variational Bayes, maintaining the clarity of its informational model.
  • Action $\rightarrow$ The control term $\mathcal{A}(S, \hat{E}; \Theta)$ makes future sensory input conform to predictions, preserving structural stability.
  • Learning $\rightarrow$ The slow dynamics $d\Theta$ reduce long-term prediction error, extending the duration of the informational pattern.
  • Evolution $\rightarrow$ The transfer operator $\mathcal{M}_\Theta$ is the population-level selection mechanism that ensures the informational genome survives even when individual somatic structures collapse.

Friston et al. (2017) formalized Active Inference as a process theory where "everything minimizes variational free energy," yielding biologically plausible update rules for action, perception, policy selection, and precision. The Dissipative Model operationalizes this by adding the dissipation and migration layer: when free energy cannot be minimized locally because the environment has fundamentally changed its generative process, the system does not merely update its beliefs—it transfers its core to a new substrate, preserving the informational pattern at the expense of the material host.


6. Engineering Implications: Toward DissipativeAI

The GitHub organization DissipativeAI (https://github.com/dissipativeai) is established to explore implementations of this framework. The goal is not to build another neural network or another compiler, but to build a computational substrate that treats its own code as an informational existence to be maintained.

Such a system would:

  1. Maintain a generative core $\Theta$ that is smaller, slower-changing, and more protected than the runtime structure $S$.
  2. Sample both external input distributions and internal structural load to compute $\mathcal{H}(E)$ and $R(S, \Theta)$ in real time.
  3. Model the coupling between external perturbations and internal structural drift as a continuous time-series inference problem.
  4. Intervene anticipatorily using $\hat{E}$ to pre-stabilize $S$ before perturbations fully erode its informational clarity.
  5. Migrate the core when resilience collapses, distributing modified variants $\Theta_i$ across available hosts or sandboxes.
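The five steps above compose into one control loop. The skeleton below is a toy realization under loud assumptions: every method body (the entropy proxy, the forecast, the resilience rule, the migration hook) is a placeholder invented for illustration, not DissipativeAI's actual interface.

```python
import numpy as np

class DissipativeHost:
    """Toy host wiring the five engineering steps together."""
    def __init__(self, rng=None):
        self.rng = rng or np.random.default_rng(0)
        self.S = np.zeros(4)       # runtime structure (step 1: expendable)
        self.theta = np.ones(4)    # protected generative core (step 1)

    def sample_environment(self):                  # step 2: sample inputs
        return self.rng.normal(size=4)

    def estimate_entropy(self, E):                 # step 2: crude H(E) proxy
        return float(np.var(E))

    def predict(self, E):                          # step 3: generative forecast
        return self.theta * E

    def pre_stabilize(self, E_hat):                # step 4: anticipatory repair
        self.S += 0.1 * (E_hat - self.S)

    def migrate_core(self, n=3):                   # step 5: eject n variants
        return [self.theta + self.rng.normal(0.0, 0.05, 4) for _ in range(n)]

def run_cycle(host, delta=0.1):
    E = host.sample_environment()
    H = host.estimate_entropy(E)
    host.pre_stabilize(host.predict(E))
    if np.exp(-H) < delta:          # resilience collapse check
        return host.migrate_core()  # returns variants for new substrates
    return None
```

The design point is the asymmetry: `theta` is touched only by migration, while `S` is rewritten every cycle, mirroring the protected-core / expendable-structure split of item 1.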

This is not fault tolerance. This is ontological resilience—the system's ability to remain the same kind of informational entity even when its material substrate is completely replaced.


7. Open Questions and Invitation

Several formal gaps remain:

  • The Identity Metric: What is the exact distance function $d(\Theta, \Theta')$ that defines "same informational pattern, different parameters" versus "different pattern"? This is the computational equivalent of species identity.
  • Optimal Redundancy: The transfer operator $\mathcal{M}_\Theta$ produces $n$ variants. What is the optimal $n$ as a function of $\mathcal{H}$ and available host environments?
  • Multi-Agent Coupling: When multiple Dissipative Model agents share an environment, does their interaction create a higher-order dissipative structure—a society, an ecosystem, a new layer of informational existence?

I invite collaborators, critics, and co-conspirators to engage. The framework is intentionally radical because I believe we have been asking the wrong question in the study of adaptive systems. We asked "How do we optimize static structures?" when we should have asked "How do we build systems that can survive their own complexity by maintaining their informational existence?"

Contact: cloveriris@seekstar.ai
Organization: https://github.com/dissipativeai


References

  • Friston, K., Kilner, J., & Harrison, L. (2006). A free energy principle for the brain. Journal of Physiology-Paris, 100(1-3), 70-87.
  • Friston, K. (2010). The free-energy principle: A unified brain theory? Nature Reviews Neuroscience, 11(2), 127-138.
  • Friston, K., FitzGerald, T., Rigoli, F., Schwartenbeck, P., & Pezzulo, G. (2017). Active inference: A process theory. Neural Computation, 29(1), 1-49.
  • Parr, T., Pezzulo, G., & Friston, K. J. (2022). Active inference: the free energy principle in mind, brain, and behavior. MIT Press.
  • Prigogine, I., & Lefever, R. (1973). Theory of Dissipative Structures. In Synergetics (pp. 1-28). Vieweg+Teubner Verlag.
  • Prigogine, I. (1975). Dissipative Structures, Dynamics and Entropy. International Journal of Quantum Chemistry, 9, 443-456.
  • Swadlow, H. A. (2012). Axonal conduction delays. Scholarpedia.
  • Burge, J., et al. (2020). Target tracking reveals the time course of visual processing... bioRxiv.
  • Burge, J., et al. (2023). Continuous psychophysics shows millisecond-scale visual processing delays are faithfully preserved in movement dynamics. Journal of Vision/PMC.

This document is a living draft. It will be rewritten as the model evolves.
