CRSM (Continuous Reasoning State Model): An asynchronous "System 2" architecture that implements Hierarchical State Sovereignty within a Mamba backbone. Unlike traditional search wrappers, CRSM uses Forward-Projected Planning and Sparse-Gated Injection to steer latent manifolds in real time, decoupling strategic reasoning from token generation.

CRSM: Continuous Reasoning State Model

⚠️ STATUS: EXPERIMENTAL PROTOTYPE This is a research experiment exploring whether a continuous background planner can guide a language model without pausing generation. While the core "Gated State Injection" mathematics have been verified for stability, the model is currently a proof-of-concept.

Exploring Asynchronous "System 2" Reasoning with Mamba

Standard Transformers typically face a trade-off: to perform "System 2" reasoning (deep planning), they must generate intermediate tokens ("System 1" output), which increases latency and computational cost.

CRSM explores an alternative approach: decoupling reasoning from generation. It combines a Mamba backbone (efficient linear-time memory) with an Asynchronous MCTS planner.

🎯 ARC-AGI Focus

The project is currently optimized for benchmarking on ARC-AGI, targeting Nano-scale implementations (100k - 500k parameters). The modular architecture allows for rapid iteration on different reasoning strategies and task-specific logic (e.g., Grid-based spatial reasoning).


📚 Documentation Hub


💡 Key Architectural Experiments

1. Sparse-Gated Hierarchical Injection

To maintain stability while allowing deep planning, CRSM uses Sparse-Gated Injection. Each layer in the Mamba hierarchy is treated as a sovereign entity with its own "gate": the planner injects state updates into each layer independently, scaled by its confidence $\alpha_i$ in that layer's target.

$$h_{i,t} \leftarrow (1 - \alpha_{i}) \cdot h_{i,t} + \alpha_{i} \cdot h_{i,\text{target}}$$

Here $h_{i,t}$ is layer $i$'s hidden state at generation step $t$ and $h_{i,\text{target}}$ is the planner's proposed state for that layer. This allows the planner to aggressively update high-level strategy layers (e.g., the topmost layers) while leaving low-level feature layers (e.g., Layer 1) untouched.
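
A minimal sketch of the per-layer update (illustrative code, not the repository's API; the function name, tensor shapes, and gate values are assumptions):

import torch

def gated_injection(hidden_states, target_states, gates):
    # Blend each layer's hidden state toward the planner's target,
    # scaled by that layer's gate alpha_i in [0, 1].
    return [
        (1.0 - alpha) * h + alpha * h_target
        for h, h_target, alpha in zip(hidden_states, target_states, gates)
    ]

# Example: steer the top layer aggressively, leave the lower layers untouched.
states  = [torch.randn(1, 256) for _ in range(4)]   # one state per Mamba layer
targets = [torch.randn(1, 256) for _ in range(4)]   # planner-proposed states
gates   = [0.0, 0.0, 0.2, 0.8]                      # per-layer alpha_i
states  = gated_injection(states, targets, gates)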

2. Forward-Projected Planning (Alignment)

Planning takes time. In an asynchronous system, by the time MCTS finds a better state $S_t$, the generator has already moved on to $S_{t+3}$. CRSM addresses this via Forward Projection: the planner uses its internal dynamics model to "fast-forward" the current state to the anticipated target position before starting the search. Updates are then held in a Targeted Delta Buffer and applied at the generation step where they align with the generator's position.
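
A minimal sketch of the projection step, assuming the dynamics model maps one latent state to the next (forward_project, dyn, and the shapes below are illustrative, not the repository's API):

import torch
import torch.nn as nn

def forward_project(state, dynamics_model, lag_steps):
    # Fast-forward a snapshot of the generator's state by lag_steps so
    # that MCTS searches from the state the generator is expected to be
    # in when the planner's result arrives.
    projected = state
    for _ in range(lag_steps):
        projected = dynamics_model(projected)
    return projected

# Illustrative usage with a stand-in dynamics model and a 3-step lag.
dyn = nn.Linear(256, 256)                    # placeholder latent dynamics model
snapshot = torch.randn(1, 256)               # generator state at step t
target = forward_project(snapshot, dyn, 3)   # anticipated state around t+3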

3. Multi-Headed Consensus & Uncertainty

Instead of a single "Value Head," CRSM employs a Multi-Headed Value Critic (MV-Critic) with one value head per layer. The planner's utility score is a weighted consensus of these heads. If the layers disagree (high variance across the heads' estimates), the system applies an Uncertainty Penalty, favoring reasoning paths that are stable across all levels of abstraction.
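
A minimal sketch of the consensus scoring (hypothetical helper; the actual weighting and penalty scheme in the codebase may differ):

import torch

def consensus_value(per_layer_values, weights, uncertainty_coef=1.0):
    # Weighted consensus of the per-layer value heads, penalised by
    # their disagreement (variance), so paths that all abstraction
    # levels agree on score higher than paths only one level likes.
    weighted_mean = (weights * per_layer_values).sum()
    disagreement = per_layer_values.var(unbiased=False)
    return weighted_mean - uncertainty_coef * disagreement

values  = torch.tensor([0.7, 0.6, 0.1, 0.8])   # one value estimate per layer
weights = torch.tensor([0.1, 0.2, 0.3, 0.4])   # consensus weights (sum to 1)
score = consensus_value(values, weights)       # lower when the heads disagree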


⚡ Quick Start

Installation

git clone https://github.com/Pomilon-Intelligence-Lab/CRSM.git
cd CRSM
pip install -e .

Autonomous Inference

Run the model with the "Thinking" loop active:

import torch
import asyncio
from crsm.core import CRSMModel, CRSMConfig

async def main():
    # 1. Load Model (Nano configuration)
    config = CRSMConfig(
        vocab_size=1024, 
        hidden_size=256, 
        num_hidden_layers=4,
        injection_rate=0.05
    )
    model = CRSMModel(config).cuda()
    
    # 2. Generate with Asynchronous Deliberation
    prompt = torch.tensor([[10, 20, 30]]).cuda()
    
    output = await model.think_and_generate(
        prompt, 
        max_length=50, 
        use_deliberation=True,
        deliberation_lag=3
    )
    print("Generated:", output)

if __name__ == "__main__":
    asyncio.run(main())

Usage

Unified Benchmarking & Validation

The central tool for verifying both functional and operational validity is scripts/eval/benchmark.py. It automates backbone training, subconscious reasoning training, and ablation studies.

1. Synthetic Sanity Check (Fast)

Verify the architecture can learn identity and simple translations:

python scripts/eval/benchmark.py --config configs/arc_nano.yaml --type sanity

2. Official ARC-AGI Benchmark

Run the full pipeline on the official fchollet/ARC dataset:

python scripts/eval/benchmark.py --config configs/arc_official.yaml --type official

Understanding Operational Proofs

The benchmark reports two critical signals of "Working Reasoning":

  1. Discrimination Accuracy: Measures whether the Multi-Headed Value Critic can distinguish correct states from noisy ones. Accuracy above 50% indicates that the "subconscious" critic is learning to judge state quality.
  2. MCTS Improvement Delta: The performance gain of MCTS over greedy search. A positive delta indicates that the search is operationally steering the model towards better solutions. Both signals are sketched below.
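
For intuition, the two signals can be computed roughly as follows (hypothetical helper names; the benchmark script's actual reporting may differ):

def discrimination_accuracy(scores_correct, scores_noisy):
    # Fraction of pairs where the critic ranks the correct state above
    # its corrupted counterpart; above 0.5 is better than chance.
    wins = sum(c > n for c, n in zip(scores_correct, scores_noisy))
    return wins / len(scores_correct)

def mcts_improvement_delta(mcts_accuracy, greedy_accuracy):
    # Positive when planning beats greedy decoding on the same tasks.
    return mcts_accuracy - greedy_accuracy

print(discrimination_accuracy([0.9, 0.4, 0.7], [0.2, 0.5, 0.1]))  # 2/3 of pairs ranked correctly
print(mcts_improvement_delta(0.31, 0.24))                         # positive: search is helping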

Training

To train a model on general tasks using the modular trainer:

python run.py --task lm --config configs/training_config.yaml

🧪 Verification

The repository includes a test suite to verify the stability of the state injection math and the functionality of the components.

  • Architecture Stability: tests/test_architecture_stability.py (Verifies Gated Injection properties).
  • Capabilities: tests/verify_capabilities.py (Basic capability checks).

Run the core stability verification:

python tests/test_architecture_stability.py

🧬 Project Origins & Transparency

This project follows a "Centaur" workflow—combining human direction and engineering with AI-assisted research.

The Spark: The core concept—replacing linear token-based planning with a continuous "thinking module"—originated from a research session I conducted with Gemini 2.5 Flash.

Original Prompt:

"Help me research ways to develop the next SOTA open-source mode. My idea is that instead of relying on architectures like Transformers, which just predict linearly the next token in a sequence and thinks in the tokens that it generates... we could develop a new architecture that instead includes an internal reasoning component or a thinking module..."

Development Process:

  • Foundational Research: The initial feasibility study and architectural concepts were generated by AI and are preserved in docs/FOUNDATIONAL_RESEARCH.md.
  • Implementation: I utilized LLMs (ChatGPT, Claude, Gemini) to assist in drafting complex component code.
  • Verification & Engineering: I personally handled the system integration, testing, debugging, and critical mathematical verification (such as the "Gated Injection" solution).

I believe this transparency is important to accurately represent the collaborative nature of modern experimental coding.


🧠 Inspirations & Acknowledgements

This project is an experimental synthesis of existing breakthrough research. It attempts to combine these distinct ideas into a unified architecture. I claim no credit for the foundational concepts, only for the specific implementation of their integration (CRSM).

Core Theoretical Foundations

  • MuZero (Schrittwieser et al., DeepMind): The primary inspiration for performing Monte Carlo Tree Search (MCTS) entirely within a learned latent space, without decoding back to observations. CRSM adapts this "planning in latent space" concept to the continuous state of a language model.
  • Mamba (Gu & Dao): The efficient State Space Model (SSM) backbone is the engine of this architecture. Its fixed-size, linear-time state enables the direct state manipulation and injection that would be computationally prohibitive with the KV-cache of Transformers.
  • Tree of Thoughts (Yao et al.) & Chain of Thought (Wei et al.): The inspiration for treating reasoning as a search problem over a space of intermediate steps. CRSM attempts to make this search internal and continuous rather than external and discrete.

Cognitive Frameworks

  • System 1 & System 2 (Daniel Kahneman): The guiding conceptual framework.
    • System 1 (Intuition): Represented by the Mamba backbone (fast, heuristic generation).
    • System 2 (Deliberation): Represented by the Asynchronous MCTS planner (slow, logical search).
  • Global Workspace Theory (Baars): The idea of a "working memory" where conscious processing occurs inspired the design of the Latent State as a shared workspace that both the planner and generator can access and modify.

Emerging Research

  • Coconut (Chain of Continuous Thought): A parallel line of research exploring reasoning in continuous latent space. While Coconut feeds the last hidden state back as input to the next step, CRSM modifies the internal state directly in real-time during the generation process.

Architectural Components

  • JEPA (LeCun): The design of the Latent Dynamics Model is heavily influenced by Joint Embedding Predictive Architectures—learning to predict the representation of the next state rather than the pixel/token details.
  • World Models / Dreamer (Ha & Schmidhuber, Hafner et al.): The concept of learning a compact model of the environment to simulate futures ("dreaming") for planning is directly implemented in CRSM's dynamics distillation pipeline.

Related Mechanics

  • State Delta Communication (Tang et al.): While CRSM uses "state deltas" for intra-agent self-correction (Planner → Backbone), Tang et al. explore a similar mechanic for inter-agent communication, passing "state deltas" between models to convey reasoning dynamics that are lost in discrete token communication.

Related Methodologies

  • Representation Engineering (RepE) (Zou et al.): The concept of "Top-Down" control of model behavior by manipulating the latent space is central to CRSM. Our "Gated Injection" can be viewed as a control-theoretic application of RepE, where the control vector is dynamically generated by the planner rather than a static concept vector.
  • Reasoning via Planning (RAP) (Hao et al.) & AlphaLLM (Tencent AI Lab): These works pioneered the integration of MCTS with Large Language Models to enable self-improvement and strategic planning. CRSM builds on this by moving the planning process into the asynchronous and continuous domain.
  • Plug and Play Language Models (PPLM) (Dathathri et al.) & Activation Addition (Turner et al.): These works established the foundation for steering model generation by modifying hidden states (via gradients or vector addition). CRSM extends this by using a dynamic planner to generate the steering vectors in real-time, rather than using static vectors or classifiers.
  • RLHF (Christiano et al. / OpenAI): The methodology of training a separate Value Head to estimate the utility of a language model's state is adapted directly from the foundational work on Reinforcement Learning from Human Feedback.

Mathematical & Engineering Parallels

  • Speculative Decoding (Leviathan et al.): The "draft-then-verify" computational pattern in speculative decoding shares DNA with CRSM's asynchronous design. In CRSM, the "dynamics model" acts as a latent drafter, while the MCTS planner acts as a verifier/improver running in parallel.
  • Polyak Averaging (Lillicrap et al.): The Gated Injection formula ($h_{new} = (1-\tau)h + \tau h_{target}$) is mathematically identical to the "soft target updates" used in DDPG and other RL algorithms. We apply this standard control-theory technique to maintain stability in the language model's latent manifold.
  • Quiet-STaR (Zelikman et al.): This work explores generating "internal thoughts" at every token step to improve reasoning. CRSM shares this goal but seeks to make these thoughts continuous and asynchronous rather than discrete and interleaved.

I am deeply grateful to the researchers behind these works for sharing their code and insights with the open-source community.


Reference (If you find this useful)

@software{crsm2025,
  title = {CRSM: Continuous Reasoning State Model},
  author = {Pomilon},
  year = {2025},
  url = {https://github.com/Pomilon-Intelligence-Lab/CRSM}
}

License

MIT License.
