Complete reference for all Extropy CLI commands, flags, and options.
```
extropy spec ──> extropy scenario ──> extropy persona ──> extropy sample ──> extropy network ──> extropy simulate ──> extropy results
```
All commands operate within a study folder — a directory containing study.db and scenario subdirectories. Commands auto-detect the study folder from the current working directory.
Study folder structure:
```
my-study/
├── study.db                  # Canonical data store (SQLite)
├── population.v1.yaml        # Base population spec
├── scenario/
│   └── my-scenario/
│       ├── scenario.v1.yaml  # Scenario spec
│       └── persona.v1.yaml   # Persona config
└── results/
    └── my-scenario/          # Simulation outputs
```
All commands support these global options:
| Flag | Description |
|---|---|
| `--version` | Show version and exit |
| `--cost` | Show cost summary after command completes |
| `--study PATH` | Study folder path (auto-detected from cwd if not specified) |
## extropy spec

Generate a population spec from a natural language description.
```bash
# Create new study folder with population.v1.yaml
extropy spec "German surgeons" -o surgeons

# Create with custom name (surgeons/hospital-staff.v1.yaml)
extropy spec "German surgeons" -o surgeons/hospital-staff

# Iterate on existing (from within study folder)
cd surgeons && extropy spec "add income distribution"
# Creates population.v2.yaml

# Explicit file path
extropy spec "farmers" -o my-spec.yaml
```

Pipeline:

- Runs sufficiency check (may ask clarifications; in agent mode returns structured questions).
- Selects attributes (strategy, scope, dependencies, semantic metadata).
- Runs split hydration for distributions/formulas/modifiers.
- Binds constraints and computes a dependency-safe sampling order.
- Builds and validates `PopulationSpec`.
- Saves versioned output YAML.
Stage ownership notes:
- Spec stage does not persist household config (household modeling is scenario-owned).
- Name generation is not part of spec generation; names are generated at sampling/runtime.
| Name | Type | Required | Description |
|---|---|---|---|
| `description` | string | yes | Natural language population description |
| Flag | Short | Type | Default | Description |
|---|---|---|---|---|
| `--output` | `-o` | path | | Output path: study folder, folder/name, or explicit .yaml file |
| `--yes` | `-y` | flag | false | Skip confirmation prompts |
| `--answers` | | string | | JSON with pre-supplied clarification answers (for agent mode) |
| `--use-defaults` | | flag | false | Use defaults for ambiguous values instead of prompting |
- If spec validation fails, the CLI writes a versioned invalid artifact next to the target output (`population.v1.yaml` → `population.v1.invalid.v1.yaml`, then `.v2`, `.v3`, ...).
- The command exits non-zero after writing the invalid artifact.
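The versioned-invalid naming can be sketched as follows (an illustrative sketch only; the helper name and signature are hypothetical, not the CLI's internals):

```python
def next_invalid_path(name: str, existing: set[str]) -> str:
    """Return the next free versioned invalid-artifact name for a spec file.

    population.v1.yaml -> population.v1.invalid.v1.yaml, then .v2, .v3, ...
    (Illustrative sketch; the real CLI's logic is not shown in this reference.)
    """
    stem = name.removesuffix(".yaml")
    k = 1
    while f"{stem}.invalid.v{k}.yaml" in existing:
        k += 1
    return f"{stem}.invalid.v{k}.yaml"

print(next_invalid_path("population.v1.yaml", set()))
# population.v1.invalid.v1.yaml
print(next_invalid_path("population.v1.yaml", {"population.v1.invalid.v1.yaml"}))
# population.v1.invalid.v2.yaml
```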
## extropy scenario

Create a scenario with scenario-specific attributes and simulation configuration.
The scenario command is essentially a mini spec builder — it discovers and researches attributes that are specific to this scenario but not in the base population spec. For example, a "vaccine adoption" scenario might add vaccine_hesitancy and prior_flu_shot attributes that wouldn't exist in a general population spec.
```bash
# Create new scenario
extropy scenario "AI diagnostic tool adoption" -o ai-adoption

# Pin population version
extropy scenario "vaccine mandate" -o vaccine @pop:v1

# Rebase existing scenario to new population
extropy scenario "rebase marker" -o ai-adoption --rebase @pop:v2
```

Pipeline:

- Runs sufficiency check — infers duration/type/unit/focus hints and asks clarifications if needed
- Discovers scenario-specific attributes — identifies extension attributes not already in the base population
- Hydrates extension + household config — researches distributions and scenario household semantics
- Binds constraints — validates dependencies and sampling order for extension attributes
- Compiles scenario dynamics — builds event, exposure, interaction/spread, timeline, and outcomes
- Validates scenario contract — deterministic checks before save (base+extended refs, literals, channels, timeline, outcomes)
- Saves versioned artifact — `scenario/{name}/scenario.vN.yaml` (or versioned `.invalid` on fail-hard)
- Sufficiency is intentionally lenient, but deterministic post-processing adds guardrails:
  - Explicit timeline markers (for example `week 1`, `month 0`) force evolving mode.
  - Static scenarios must have an explicit timestep unit (or trigger a clarification question).
- In agent mode, insufficiency returns structured questions with exit code `2`. `--use-defaults` retries sufficiency automatically using defaults from those clarification questions.
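A guardrail like the explicit-marker rule can be sketched with a simple pattern match (illustrative only; the actual post-processing is not shown in this reference, and the regex is an assumption):

```python
import re

# Hypothetical guardrail: detect explicit timeline markers such as
# "week 1" or "month 0" in a scenario description.
MARKER = re.compile(r"\b(hour|day|week|month|year)\s+\d+\b", re.IGNORECASE)

def forces_evolving(description: str) -> bool:
    """True if the description contains an explicit timeline marker."""
    return MARKER.search(description) is not None

print(forces_evolving("Protests start in week 1 and peak by Month 3"))  # True
print(forces_evolving("A one-off product recall announcement"))         # False
```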
| Name | Type | Required | Description |
|---|---|---|---|
| `description` | string | yes | Scenario description (what event/situation to simulate) |
| `population_ref` | string | no | Population version reference: `@pop:v1` or `@pop:latest` |
| Flag | Short | Type | Default | Description |
|---|---|---|---|---|
| `--output` | `-o` | string | required | Scenario name (creates `scenario/{name}/scenario.v1.yaml`) |
| `--rebase` | | string | | Rebase existing scenario to new population version (e.g. `@pop:v2`) |
| `--timeline` | | string | `auto` | Timeline mode: `auto` (LLM decides), `static` (single event), `evolving` (multi-event) |
| `--timestep-unit` | | string | inferred | Override timestep unit: `hour`, `day`, `week`, `month`, `year` |
| `--max-timesteps` | | int | inferred | Override simulation horizon |
| `--use-defaults` | | flag | false | Auto-answer sufficiency clarifications with defaults |
| `--yes` | `-y` | flag | false | Skip confirmation prompts |
The generated `scenario.v1.yaml` includes:

- `extended_attributes` — Scenario-specific attributes with full distribution specs (same format as population attributes)
- `event` — Event definition (type, content, source, credibility, ambiguity, emotional valence)
- `timeline` — For evolving scenarios: subsequent events at different timesteps
- `seed_exposure` — Channels and rules for initial exposure
- `interaction` — How agents interact about the event
- `spread` — How information propagates through the network
- `outcomes` — What to measure from each agent
- `simulation` — Timestep config, stopping conditions, convergence settings
- `household_config` + `agent_focus_mode` — Scenario-owned household semantics for the sample stage
- `sampling_semantic_roles` — Scenario-level semantic role mappings used by sampling/runtime checks
- `identity_dimensions` (optional) — Identity activation hints consumed by simulation prompts
- If no scenario extension attributes are discovered, scenario creation hard-fails and writes a versioned JSON invalid artifact.
- If compile fails mid-pipeline, scenario creation hard-fails and writes a versioned JSON invalid artifact.
- If final scenario validation fails, the CLI writes a versioned YAML invalid artifact (`scenario.vN.invalid.vK.yaml`) and exits non-zero.
## extropy persona

Generate persona rendering configuration for a scenario.
```bash
# Generate for a scenario (auto-versions)
extropy persona -s ai-adoption

# Pin scenario version
extropy persona -s ai-adoption@v1

# Preview existing config
extropy persona -s ai-adoption --show
```

Pipeline:

- Resolves scenario and loads `scenario.vN.yaml`.
- Loads referenced base population and merges `extended_attributes`.
- Runs persona generation pipeline (structure, boolean/categorical/relative/concrete phrasings).
- Validates generated config against merged attributes (`validate_persona_config`).
- Saves versioned output YAML (`persona.vN.yaml`).
Notes:
- If sampled agents already exist, persona generation computes `population_stats` at generation time.
- If not, stats can be backfilled later at simulation runtime.
| Flag | Short | Type | Default | Description |
|---|---|---|---|---|
| `--scenario` | `-s` | string | auto | Scenario name (auto-selects if only one exists) |
| `--output` | `-o` | path | | Output file (default: `scenario/{name}/persona.vN.yaml`) |
| `--preview/--no-preview` | | flag | true | Reserved flag (currently not used as a separate generation gate) |
| `--agent` | | int | 0 | Which agent to use for preview |
| `--yes` | `-y` | flag | false | Skip confirmation prompts |
| `--show` | | flag | false | Preview existing persona config without regenerating |
- If generation fails, the CLI writes a versioned JSON invalid artifact and exits non-zero.
- If persona validation fails, the CLI writes a versioned YAML invalid artifact (`persona.vN.invalid.vK.yaml`) and exits non-zero.
- `extropy validate persona.vN.yaml` (or the `.invalid` artifact) runs persona-specific validation against merged base+extended attributes.
## extropy sample

Sample agents from a scenario's merged population spec.
```bash
extropy sample -s ai-adoption -n 500
extropy sample -s ai-adoption -n 1000 --seed 42 --report
extropy sample -n 500   # auto-selects scenario if only one exists
```

Pipeline:

- Resolves scenario and requires persona config pre-flight.
- Loads base population + scenario extension and builds the merged spec.
- Recomputes merged sampling order via topological sort.
- Validates the merged spec.
- Samples agents using scenario household config/focus/semantic roles.
- Runs deterministic post-sample rule-pack gate.
- Saves agents and run metadata to `study.db`.
| Flag | Short | Type | Default | Description |
|---|---|---|---|---|
| `--scenario` | `-s` | string | auto | Scenario name (auto-selects if only one exists) |
| `--count` | `-n` | int | required | Number of agents to sample |
| `--seed` | | int | random | Random seed for reproducibility |
| `--report` | `-r` | flag | false | Show distribution summaries and stats |
| `--skip-validation` | | flag | false | Skip validator errors |
| `--strict-gates` | | flag | false | Promote high-risk warnings and post-sample condition warnings to fail-hard |
Exit codes: 0 = Success, 1 = Validation error, 3 = File not found, 4 = Sampling error
Sampling process:
- Loads the scenario's `base_population` spec
- Merges with the scenario's `extended_attributes`
- Recomputes merged dependency order (topological sort)
- Validates the merged spec
- Samples agents
- Applies rule-pack gate (`impossible`/`implausible`)
- Saves to `study.db` keyed by `scenario_id`
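The merged dependency-order step is a standard topological sort over attribute dependencies. A minimal sketch using Python's standard library (the attribute graph shown is hypothetical, not taken from a real spec):

```python
from graphlib import TopologicalSorter

# Hypothetical merged attribute dependency graph: each attribute maps to
# the attributes it depends on (base attributes plus one scenario extension).
deps = {
    "age": set(),
    "occupation": {"age"},
    "income": {"age", "occupation"},
    "vaccine_hesitancy": {"age", "income"},  # scenario extension attribute
}

# static_order() yields attributes so that dependencies precede dependents,
# which is the property a dependency-safe sampling order needs.
order = list(TopologicalSorter(deps).static_order())
print(order)
```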
- Missing persona config blocks sampling pre-flight.
- Merged-order cycles or merged-spec validation failures write versioned JSON invalid artifacts and exit non-zero.
- Post-sampling gate failure writes a versioned JSON invalid artifact (`sample.invalid.vN.json`) and exits non-zero.
## extropy network

Generate a social network from sampled agents.
```bash
extropy network -s ai-adoption                           # Uses LLM-generated config (default)
extropy network -s ai-adoption --avg-degree 15 --seed 42 # Custom degree and seed
extropy network -s ai-adoption --no-generate-config      # Flat network, no similarity structure
extropy network -s ai-adoption -c custom-network.yaml    # Load custom config
```

Pipeline:

- Resolves study + scenario and verifies sampled agents exist.
- Loads scenario + base population and builds merged attribute context (base + extension) for config generation.
- Resolves config in this order:
  1. explicit `--network-config`
  2. latest auto-detected `scenario/<name>/*.network-config.yaml`
  3. LLM-generated config (`--generate-config`, default)
  4. empty config fallback (`--no-generate-config`)
- Applies CLI overrides, quality profile defaults, and resource auto-tuning.
- Generates network (with metrics unless `--no-metrics`).
- Evaluates topology gate and persists result to `study.db`.
- Optionally exports a non-canonical JSON copy with `--output`.
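The resolution order can be sketched as a simple precedence chain (function name and return values are illustrative, not the CLI's API):

```python
# Illustrative sketch of the documented network-config resolution order.
def resolve_network_config(explicit, autodetected, generate_config: bool):
    if explicit is not None:        # 1. explicit --network-config
        return explicit
    if autodetected is not None:    # 2. latest *.network-config.yaml in scenario dir
        return autodetected
    if generate_config:             # 3. LLM-generated config (default)
        return "llm-generated"
    return {}                       # 4. empty fallback (--no-generate-config)

print(resolve_network_config(None, None, True))           # llm-generated
print(resolve_network_config("custom.yaml", None, True))  # custom.yaml
```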
| Flag | Short | Type | Default | Description |
|---|---|---|---|---|
| `--scenario` | `-s` | string | auto | Scenario name (auto-selects if only one exists) |
| `--output` | `-o` | path | | Optional JSON export path (non-canonical) |
| `--network-config` | `-c` | path | | Custom network config YAML file |
| `--save-config` | | path | | Save the (generated or loaded) network config to YAML |
| `--generate-config` | | flag | true | Generate network config via LLM from population spec (default: enabled) |
| `--avg-degree` | | float | unset | Override target average degree (otherwise keep config value) |
| `--rewire-prob` | | float | unset | Override rewiring probability (otherwise keep config value) |
| `--seed` | | int | unset | Override config seed (if unset, generator picks seed) |
| `--validate` | | flag | false | Print validation metrics |
| `--no-metrics` | | flag | false | Skip computing node metrics (faster) |
| Flag | Type | Default | Description |
|---|---|---|---|
| `--quality-profile` | string | `balanced` | Quality profile: `fast`, `balanced`, `strict` |
| `--candidate-mode` | string | `blocked` | Similarity candidate mode: `exact`, `blocked` |
| `--candidate-pool-multiplier` | float | 12.0 | Blocked mode candidate pool size as multiple of avg_degree |
| `--block-attr` | string (repeatable) | auto | Blocking attribute(s). If omitted, auto-selects top attributes |
| `--similarity-workers` | int | 0 | Worker processes for similarity computation (0 = auto) |
| `--similarity-chunk-size` | int | 64 | Row chunk size for similarity worker tasks |
| Flag | Type | Default | Description |
|---|---|---|---|
| `--checkpoint` | path | | DB path for checkpointing (must resolve to the same file as study.db) |
| `--resume` | flag | false | Resume similarity and calibration checkpoints from study.db |
| `--checkpoint-every` | int | 250 | Write checkpoint every N processed similarity rows |
| Flag | Type | Default | Description |
|---|---|---|---|
| `--resource-mode` | string | `auto` | Resource tuning mode: `auto`, `manual` |
| `--safe-auto-workers/--unsafe-auto-workers` | flag | true | Conservative auto tuning for laptops/VMs |
| `--max-memory-gb` | float | | Optional memory budget cap for auto resource tuning |
- Missing sampled agents blocks network generation pre-flight.
- Invalid option values (`quality_profile`, `candidate_mode`, `topology_gate`, checkpoint mismatch) exit non-zero.
- Strict topology-gate failures (`quality.accepted=false` with strict gate and `N>=50`) exit non-zero:
  - by default, the command saves a quarantined network artifact and does not report canonical success;
  - if quarantine is disabled via an advanced flag, the command still exits non-zero.
- Generated configs can be auto-saved into `scenario/<name>/network-config.seed*.yaml`.
- Use `extropy query network-status <network_run_id>` to inspect calibration/progress records.
## extropy simulate

Run a simulation from a scenario spec.
```bash
extropy simulate -s ai-adoption
extropy simulate -s ai-adoption --seed 42 --strong anthropic/claude-sonnet-4-6
extropy simulate -s ai-adoption --fidelity high
extropy simulate -s asi-announcement --early-convergence off
```

Pipeline:

- Resolves study folder and scenario.
- Pre-flight checks required upstream artifacts:
  - sampled agents exist for the scenario,
  - network edges exist for the scenario,
  - persona config exists for the scenario.
- Validates runtime flags (`--resume`/`--run-id`, `--resource-mode`, `--early-convergence`).
- Resolves effective models/rate limits from CLI overrides, then config defaults.
- Runs simulation loop:
  - seed + timeline + network exposures,
  - chunked reasoning (two-pass by default, merged with `--merged-pass`) with per-timestep reasoning budget,
  - medium/high conversation interleaving with novelty + per-timestep conversation budget,
  - timestep summary + stopping checks.
- Persists run state to canonical `study.db` and writes results artifacts to `results/{scenario}/` (or `--output`).
| Flag | Short | Type | Default | Description |
|---|---|---|---|---|
| `--scenario` | `-s` | string | auto | Scenario name (auto-selects if only one exists) |
| `--output` | `-o` | path | `results/{scenario}/` | Output results directory |
| `--seed` | | int | random | Random seed for reproducibility |
| `--fidelity` | `-f` | string | `medium` | Fidelity level: `low`, `medium`, `high` |
| `--merged-pass` | | flag | false | Use single merged reasoning pass instead of two-pass (experimental) |
| `--threshold` | `-t` | int | 3 | Multi-touch threshold for re-reasoning |
| `--early-convergence` | | string | `auto` | Override convergence auto-stop policy: `auto`, `on`, `off` |
| `--chunk-size` | | int | 50 | Agents per reasoning chunk for checkpointing |
`--early-convergence` controls whether convergence/quiescence auto-stops can end a run early:

- `auto` (default): use the scenario YAML value (`simulation.allow_early_convergence`), else the engine auto-rule.
- `on`: force-enable early convergence auto-stops for this run.
- `off`: force-disable early convergence auto-stops for this run.

Precedence:

- CLI flag (`on`/`off`) wins.
- Scenario YAML (`simulation.allow_early_convergence`) is used when the CLI is `auto`.
- If both are unset (`auto` + YAML `null`), the engine auto-rule applies: convergence/quiescence auto-stop only when no future timeline events remain.
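The precedence rules can be sketched as follows (an illustrative model, not the engine's actual code; the function name is hypothetical):

```python
# Illustrative sketch of the documented --early-convergence precedence:
# CLI on/off wins, then scenario YAML, then the engine auto-rule.
def allow_early_convergence(cli: str, yaml_value, future_events_remain: bool) -> bool:
    if cli == "on":
        return True
    if cli == "off":
        return False
    if yaml_value is not None:        # CLI is "auto": defer to scenario YAML
        return bool(yaml_value)
    return not future_events_remain   # engine auto-rule

print(allow_early_convergence("auto", None, future_events_remain=True))   # False
print(allow_early_convergence("on", False, future_events_remain=True))    # True
```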
| Flag | Short | Type | Default | Description |
|---|---|---|---|---|
| `--strong` | | string | config | Strong model for Pass 1 (provider/model) |
| `--fast` | | string | config | Fast model for Pass 2 (provider/model) |
| Flag | Type | Default | Description |
|---|---|---|---|
| `--rate-tier` | int | config | Provider rate limit tier (1-4) |
| `--rpm-override` | int | | Override requests per minute |
| `--tpm-override` | int | | Override tokens per minute |
| Flag | Type | Default | Description |
|---|---|---|---|
| `--run-id` | string | auto | Explicit run id (required with `--resume`) |
| `--resume` | flag | false | Resume an existing run from study DB checkpoints |
| `--checkpoint-every-chunks` | int | 1 | Persist simulation chunk checkpoints every N chunks |
| Flag | Type | Default | Description |
|---|---|---|---|
| `--writer-queue-size` | int | 256 | Max reasoning chunks buffered before DB writer backpressure |
| `--db-write-batch-size` | int | 100 | Number of chunks applied per DB writer transaction |
| `--retention-lite` | flag | false | Reduce retained payload volume (drops full raw reasoning text) |
| Flag | Type | Default | Description |
|---|---|---|---|
| `--resource-mode` | string | `auto` | Resource tuning mode: `auto`, `manual` |
| `--safe-auto-workers/--unsafe-auto-workers` | flag | true | Conservative auto tuning for laptop/VM environments |
| `--max-memory-gb` | float | | Optional memory budget cap for auto resource tuning |
| Flag | Short | Type | Default | Description |
|---|---|---|---|---|
| `--quiet` | `-q` | flag | false | Suppress progress output |
| `--verbose` | `-v` | flag | false | Show detailed logs |
| `--debug` | | flag | false | Show debug-level logs (very verbose) |
- `--resume` requires an explicit `--run-id`.
- Scenario lookup is scenario-name first, with legacy id fallback for older studies.
- `--early-convergence auto` uses the scenario YAML value when set; otherwise the runtime auto-rule applies (do not early-stop while future timeline events remain).
- `low` fidelity skips conversations; `medium` and `high` enable conversations, with stricter per-agent caps at lower fidelity.
- `--retention-lite` drops full raw reasoning payload retention to reduce DB/storage volume.
- Timeline events without explicit `exposure_rules` use bounded fallback filtering (not full-seed-rule replay).
- `extreme` re-reasoning is bounded to a high-salience subset, not all aware agents.
- Missing study folder/scenario/persona/agents/network fails pre-flight and exits non-zero.
- Invalid flag values (for example bad `--resource-mode` or `--early-convergence`) fail fast and exit non-zero.
- Runtime exceptions mark the simulation run as `failed` in `simulation_runs` and return non-zero.
- Successful completion updates run status to `completed` or `stopped` (when a stop condition ends the run early).
## extropy results

Display simulation results. Supports subcommands for different views.
```bash
extropy results                  # summary (default)
extropy results timeline         # timestep progression
extropy results segment income   # segment by attribute
extropy results agent agent_042  # single agent details
```

| Flag | Short | Type | Default | Description |
|---|---|---|---|---|
| `--scenario` | `-s` | string | | Filter by scenario |
| `--run-id` | | string | latest | Simulation run ID |
### extropy results summary

Default view when no subcommand is given. Shows agent count, awareness rate, and position distribution.
### extropy results timeline

Shows timestep-by-timestep progression including new exposures, agents reasoned, shares, and exposure rate.
### extropy results segment

| Argument | Type | Required | Description |
|---|---|---|---|
| `attribute` | string | yes | Agent attribute to segment by |
### extropy results agent

| Argument | Type | Required | Description |
|---|---|---|---|
| `agent_id` | string | yes | Agent ID to inspect |
Output modes:
- Human mode (`cli.mode: human`): Rich terminal formatting
- Agent mode (`cli.mode: agent`): Structured JSON output
## extropy query

Query and export raw data from the study database.
### extropy query agents

Dump agent attributes.

```bash
extropy query agents                   # print to stdout (uses latest run's scenario)
extropy query agents --to agents.jsonl # write JSONL file
extropy query agents -s congestion-tax # explicit scenario
```

| Flag | Short | Type | Default | Description |
|---|---|---|---|---|
| `--to` | | path | | Write JSONL to file |
| `--scenario` | `-s` | string | auto | Scenario name (resolved from latest run if not specified) |
| `--run-id` | | string | | Simulation run ID (used to resolve scenario if not specified) |
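The JSONL export is one JSON object per line, so it can be consumed with standard tooling. A minimal sketch (the field names shown are hypothetical, not the tool's actual schema):

```python
import json
import io

# Stand-in for an agents.jsonl file: one JSON object per line.
jsonl = io.StringIO(
    '{"agent_id": "agent_000", "age": 34}\n'
    '{"agent_id": "agent_001", "age": 51}\n'
)

agents = [json.loads(line) for line in jsonl]
print(len(agents), agents[0]["agent_id"])  # 2 agent_000
```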
### extropy query edges

Dump network edges.

```bash
extropy query edges --to edges.jsonl
extropy query edges -s congestion-tax --to edges.jsonl
```

| Flag | Short | Type | Default | Description |
|---|---|---|---|---|
| `--to` | | path | | Write JSONL to file |
| `--scenario` | `-s` | string | auto | Scenario name (resolved from latest run if not specified) |
| `--run-id` | | string | | Simulation run ID (used to resolve scenario if not specified) |
### extropy query states

Dump agent states for a simulation run.

```bash
extropy query states --to states.jsonl
extropy query states --run-id abc123 --to states.jsonl
```

| Flag | Type | Default | Description |
|---|---|---|---|
| `--run-id` | string | latest | Simulation run ID |
| `--to` | path | | Write JSONL to file |
### extropy query summary

Show study entity counts (agents, edges, simulation states, timesteps, events).

```bash
extropy query summary
extropy query summary -s congestion-tax
```

| Flag | Short | Type | Default | Description |
|---|---|---|---|---|
| `--run-id` | | string | latest | Simulation run ID |
| `--scenario` | `-s` | string | auto | Scenario name (resolved from latest run if not specified) |
### extropy query network

Show network statistics (edge count, average weight, top-degree nodes).

```bash
extropy query network
extropy query network -s congestion-tax
```

| Flag | Short | Type | Default | Description |
|---|---|---|---|---|
| `--scenario` | `-s` | string | auto | Scenario name (resolved from latest run if not specified) |
| `--run-id` | | string | | Simulation run ID (used to resolve scenario if not specified) |
| `--top` | | int | 10 | Number of top-degree nodes to show |
### extropy query network-status

Show network calibration progress.

```bash
extropy query network-status <run-id>
```

| Argument | Type | Required | Description |
|---|---|---|---|
| `network_run_id` | string | yes | Network generation run ID |
### extropy query sql

Run a read-only SQL query against the study database.

```bash
extropy query sql "SELECT count(*) FROM agents"
extropy query sql "SELECT * FROM agent_states LIMIT 10" --format json
```

| Argument | Type | Required | Description |
|---|---|---|---|
| `sql` | string | yes | Read-only SQL statement |

| Flag | Type | Default | Description |
|---|---|---|---|
| `--limit` | int | 1000 | Max rows to return |
| `--format` | string | `table` | Output format: `table`, `json`, `jsonl` |

Only `SELECT`, `WITH`, and `EXPLAIN` queries are allowed.
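The read-only restriction amounts to allowing only statements that begin with one of three keywords. A sketch of such a guard (illustrative only; the actual implementation may inspect statements more deeply):

```python
# Illustrative read-only guard matching the documented rule that only
# SELECT, WITH, and EXPLAIN statements are allowed.
ALLOWED_PREFIXES = ("SELECT", "WITH", "EXPLAIN")

def is_read_only(sql: str) -> bool:
    """True if the statement's first keyword is in the allow-list."""
    first_word = sql.split(None, 1)[0].upper() if sql.strip() else ""
    return first_word in ALLOWED_PREFIXES

print(is_read_only("SELECT count(*) FROM agents"))  # True
print(is_read_only("DELETE FROM agents"))           # False
```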
## extropy validate

Validate a population or scenario spec.
```bash
extropy validate population.v1.yaml                       # Population spec
extropy validate scenario/congestion-tax/scenario.v1.yaml # Versioned scenario spec
extropy validate my-scenario.scenario.yaml                # Legacy scenario spec
extropy validate population.v1.yaml --strict              # Treat warnings as errors
```

| Name | Type | Required | Description |
|---|---|---|---|
| `spec_file` | path | yes | Spec file to validate |

| Flag | Type | Default | Description |
|---|---|---|---|
| `--strict` | flag | false | Treat warnings as errors (population specs only) |
Auto-detects file type based on naming:
- `*.scenario.yaml` or `*.scenario.yml` → scenario spec validation
- `scenario.yaml` or `scenario.yml` → scenario spec validation
- `scenario.v{N}.yaml` or `scenario.v{N}.yml` → scenario spec validation (versioned)
- Other `*.yaml` files → population spec validation
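The naming rules above can be sketched as follows (illustrative; the CLI's actual matching logic may differ):

```python
import re

# Illustrative sketch of the documented naming-based file-type detection.
def spec_kind(filename: str) -> str:
    name = filename.rsplit("/", 1)[-1]
    if re.fullmatch(r"(.+\.)?scenario\.ya?ml", name):
        return "scenario"
    if re.fullmatch(r"scenario\.v\d+\.ya?ml", name):
        return "scenario (versioned)"
    return "population"

print(spec_kind("my-scenario.scenario.yaml"))                 # scenario
print(spec_kind("scenario/congestion-tax/scenario.v1.yaml"))  # scenario (versioned)
print(spec_kind("population.v1.yaml"))                        # population
```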
Supports both flows for scenario validation:
- New flow: `meta.base_population` references a versioned population (e.g., `population.v2`)
- Legacy flow: `meta.population_spec` + `meta.study_db` file paths
Exit codes: 0 = Success (valid spec), 1 = Validation error (invalid spec), 3 = File not found
## extropy config

View and modify configuration.
```bash
extropy config show
extropy config set models.fast openai/gpt-5-mini
extropy config set simulation.strong anthropic/claude-sonnet-4-6
extropy config set simulation.strong openrouter/anthropic/claude-sonnet-4-6
extropy config reset
```

| Name | Type | Description |
|---|---|---|
| `action` | string | Action: `show`, `set`, `reset` |
| `key` | string | Config key (for `set`) |
| `value` | string | Value to set (for `set`) |
| Key | Description |
|---|---|
| `models.fast` | Fast model for pipeline (provider/model) |
| `models.strong` | Strong model for pipeline (provider/model) |
| `simulation.fast` | Fast model for simulation Pass 2 |
| `simulation.strong` | Strong model for simulation Pass 1 |
| `simulation.max_concurrent` | Max concurrent LLM calls |
| `simulation.rate_tier` | Rate limit tier (1-4) |
| `simulation.rpm_override` | RPM override |
| `simulation.tpm_override` | TPM override |
| `cli.mode` | CLI mode: `human` (interactive) or `agent` (JSON output) |
| `show_cost` | Show cost tracking |
| `providers.<name>.base_url` | Custom provider base URL |
| `providers.<name>.api_key_env` | Custom provider API key env var |
## extropy chat

Interactive chat with simulated agents. Auto-detects study folder from current working directory.
```bash
cd austin   # study folder
extropy chat
```

| Flag | Type | Default | Description |
|---|---|---|---|
| `--run-id` | string | latest | Simulation run ID |
| `--agent-id` | string | auto | Agent ID (auto-selects first agent if not specified) |
| `--session-id` | string | auto | Chat session ID |
REPL commands: `/context`, `/timeline <n>`, `/history`, `/exit`
### extropy chat list

Show recent runs and sample agents so users can pick chat targets quickly.

```bash
cd austin && extropy chat list
```

| Flag | Type | Default | Description |
|---|---|---|---|
| `--limit-runs` | int | 10 | Number of recent runs to list |
| `--agents-per-run` | int | 5 | Number of sample agent IDs per run |
### extropy chat ask

Non-interactive API for automation.

```bash
cd austin && extropy chat ask --prompt "What changed your mind?"
```

| Flag | Type | Default | Description |
|---|---|---|---|
| `--run-id` | string | latest | Simulation run ID |
| `--agent-id` | string | auto | Agent ID (auto-selects first agent if not specified) |
| `--prompt` | string | required | Question to ask |
| `--session-id` | string | auto | Chat session ID |
Output modes:
- Human mode (`cli.mode: human`): Rich terminal formatting
- Agent mode (`cli.mode: agent`): Structured JSON output
## Environment Variables

| Variable | Description |
|---|---|
| `OPENAI_API_KEY` | OpenAI API key |
| `ANTHROPIC_API_KEY` | Anthropic (Claude) API key |
| `AZURE_API_KEY` | Azure API key (preferred) |
| `AZURE_OPENAI_API_KEY` | Azure API key (legacy alias) |
| `OPENROUTER_API_KEY` | OpenRouter API key |
| `DEEPSEEK_API_KEY` | DeepSeek API key |
| Variable | Default | Description |
|---|---|---|
| `AZURE_ENDPOINT` | | Azure endpoint URL (preferred) |
| `AZURE_OPENAI_ENDPOINT` | | Azure endpoint URL (legacy alias) |
## End-to-End Example

```bash
# Create study folder with population spec
extropy spec "Austin TX commuters" -o my-study
cd my-study

# Create scenario and persona config
extropy scenario "Response to $15/day congestion tax" -o congestion-tax
extropy persona -s congestion-tax -y

# Sample agents and generate network (LLM config by default)
extropy sample -s congestion-tax -n 500 --seed 42
extropy network -s congestion-tax --seed 42

# Run simulation
extropy simulate -s congestion-tax --seed 42

# View results
extropy results
extropy results timeline
extropy results segment income
extropy results agent agent_042

# Query data
extropy query agents --to agents.jsonl
extropy query states --to states.jsonl
extropy query summary
extropy query network
extropy query sql "SELECT count(*) FROM agents"

# Validate specs
extropy validate population.v1.yaml                       # Population spec
extropy validate scenario/congestion-tax/scenario.v1.yaml # Versioned scenario

# Config
extropy config show
extropy config set simulation.strong anthropic/claude-sonnet-4-6
extropy config set cli.mode agent   # for AI harnesses
extropy config set cli.mode human   # for terminal users (default)
```

Evolving-timeline scenario examples:

```bash
extropy scenario "ASI announcement with escalating social/economic impacts over 6 months" \
  --timeline evolving \
  --timestep-unit month \
  --max-timesteps 6 \
  -o asi-announcement -y
extropy persona -s asi-announcement -y
extropy sample -s asi-announcement -n 5000 --seed 42
extropy network -s asi-announcement --seed 42
extropy simulate -s asi-announcement --seed 42 --fidelity high --early-convergence off
```

```bash
extropy scenario "US strikes on Iran with 12-week escalation and partial de-escalation timeline" \
  --timeline evolving \
  --timestep-unit week \
  --max-timesteps 12 \
  -o iran-strikes -y
extropy persona -s iran-strikes -y
extropy sample -s iran-strikes -n 5000 --seed 42
extropy network -s iran-strikes --seed 42
extropy simulate -s iran-strikes --seed 42 --fidelity high --early-convergence off
```