This page gives an end-to-end path: environment preparation -> initialization -> starting services -> resource setup -> running the demo. For more protocol and implementation details, see the L0/L1 documentation.
The repository currently targets the final v1r4 release of the v1 line. If you are upgrading from an older v1r3 deployment, read the release notes and migration guide in ../05_operations/ first.
- Path semantics are currently best supported on Linux/macOS. If you are on Windows, run this guide under WSL2 or Docker to avoid path and process-detection compatibility issues.
- Paths referenced in sandbox/tooling config, such as `/tmp/skills-local`, `/var/lib/skills-cache`, and `/proc/<pid>/cmdline`, may require additional adaptation on native Windows.
If this is your first time pulling the repository from GitHub, clone recursively so the `card-box-cg` submodule is included:

```bash
git clone --recursive https://github.com/Intelligent-Internet/CommonGround.git
```

If you already cloned without submodule contents, run from the repository root:

```bash
git submodule update --init --recursive
```
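To confirm the submodule was actually fetched, `git submodule status` prints one line per submodule; a leading `-` means it was never initialized and the `update --init` command above is still needed:

```bash
# A populated submodule shows a plain commit hash; a leading "-" means
# "git submodule update --init --recursive" has not been run yet.
git submodule status
```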
- Python 3.13+ (the repository baseline is 3.13)
- uv
- Postgres (15+ recommended)
- NATS (2.10+, with JetStream enabled)
```bash
uv sync
docker compose up -d nats postgres
```
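Before moving on, it is worth checking that both containers came up. A minimal probe, assuming the default compose service names and that the Postgres client tools (for `pg_isready`) are installed:

```bash
# Both containers should show a running/healthy state.
docker compose ps
# Readiness probe for Postgres; adjust -p to your mapped port (5432 or 5433).
pg_isready -h 127.0.0.1 -p 5432
```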
By default:

- `judge` uses `[judge].model` in `config.toml`; the sample currently sets it to `gemini/gemini-2.5-flash`.
- `mock_search` uses `MOCK_SEARCH_LLM_MODEL`/`MOCK_SEARCH_LLM_PROVIDER` from `services.tools.mock_search`; the sample currently sets them to `gemini/gemini-2.5-flash` + `gemini`.
If you do not have `GEMINI_API_KEY`, switch to another model before starting:

```bash
# switch only Judge
export CG__JUDGE__MODEL="gpt-5-mini"  # or moonshot/kimi-k2.5
export OPENAI_API_KEY="..."

# switch only mock_search
export MOCK_SEARCH_LLM_PROVIDER="openai"
export MOCK_SEARCH_LLM_MODEL="gpt-5-mini"  # or moonshot/kimi-k2.5
export OPENAI_API_KEY="..."
export MOONSHOT_API_KEY="..."
```

You can also edit `config.toml` directly: `[judge].model`, `[tools.mock_search].llm_model`, `[tools.mock_search].llm_provider`.
Note: `CG__SECTION__KEY` (such as `CG__JUDGE__MODEL`) is synchronized with `config.toml`, and environment variables take precedence.
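Concretely, the env-var name is derived from the section and key names in `config.toml` (the JSON-value form matches the `CG__SECTION__KEY` entry in the environment-variable reference at the end of this page):

```bash
# [judge].model in config.toml  ->  CG__JUDGE__MODEL
export CG__JUDGE__MODEL="gpt-5-mini"
# [nats].servers -> CG__NATS__SERVERS; JSON values are supported for list-typed keys.
export CG__NATS__SERVERS='["nats://127.0.0.1:4222"]'
```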
```bash
cp config.toml.sample config.toml
```

Edit `[protocol]`, `[nats]`, and `[cardbox]` in `config.toml` as needed, and set the DB DSN.
Important: `[protocol].version` must be `v1r4` (and match your deployed protocol). If it is missing or incorrect, NATS subjects may not match and Worker/PMO may not receive messages.
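A quick sanity check of the configured version (keep in mind that `CG__PROTOCOL__VERSION`, if exported, overrides the file):

```bash
# Print the [protocol] section; version should read "v1r4".
grep -A 2 '^\[protocol\]' config.toml
```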
Port note: the default docker-compose ports are NATS 4222 and Postgres 5432; the repository's example DSN may use 5433, so rely on your local `config.toml`.
Local override note: if port 4222 is already occupied on your machine, either update `[nats].servers` in `config.toml` or export `NATS_SERVERS=nats://127.0.0.1:4223` (or another local port) before running services/examples.
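To see whether something is already bound to the default port before deciding (standard `lsof` usage; no output means the port is free):

```bash
# Lists any process using port 4222.
lsof -i :4222
```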
Note: this operation rebuilds resource/state-related tables and also clears CardBox data (including `cards`, `card_boxes`, etc.).
PG_DSN="postgresql://postgres:postgres@localhost:5433/cardbox" uv run -m scripts.setup.reset_db
uv run -m scripts.setup.seedIf your local PostgreSQL is listening on 5432, use:
PG_DSN="postgresql://postgres:postgres@localhost:5432/cardbox" uv run -m scripts.setup.reset_db
uv run -m scripts.setup.seedDemo 2/3 depends on management API and resource setup; starting API in parallel is recommended.
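Before starting services, you can confirm the seed landed with a direct look at the database (assuming `psql` is installed; adjust the DSN/port to your setup, and note that the exact table set depends on the current schema):

```bash
# List tables; expect CardBox tables such as cards and card_boxes among them.
psql "postgresql://postgres:postgres@localhost:5433/cardbox" -c '\dt'
```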
Run each of the following long-running services in its own terminal:

```bash
uv run -m services.pmo.service
uv run -m services.agent_worker.loop
uv run -m services.api
uv run -m services.tools.mock_search
uv run -m services.ui_worker.loop
```

💡 Note: Artifact and Skills APIs depend on Google Cloud Storage (GCS). If `[gcs]` is not configured in `config.toml` or valid credentials are missing, the system degrades gracefully and disables related capabilities such as `/skills:upload` and `/artifacts:upload`. If you only want to try the core Agent orchestration flow, you can ignore this limitation for now.
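If juggling five terminals is a nuisance, a throwaway launcher like the one below works for local experiments. This is an illustrative sketch, not a script shipped in the repository; it assumes you run it from the repo root with `config.toml` in place:

```bash
#!/usr/bin/env bash
# Start each quickstart service in the background, one log file per service.
set -euo pipefail
services=(
  services.pmo.service
  services.agent_worker.loop
  services.api
  services.tools.mock_search
  services.ui_worker.loop
)
for svc in "${services[@]}"; do
  uv run -m "$svc" >"${svc//./_}.log" 2>&1 &
done
echo "Started ${#services[@]} services; Ctrl-C stops them all."
wait
```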
```bash
curl -sS -X POST http://127.0.0.1:8099/projects \
  -H 'Content-Type: application/json' \
  -d '{"project_id":"proj_demo_01","title":"Demo","owner_id":"user_seed","bootstrap":true}'

curl -sS -X POST http://127.0.0.1:8099/projects/proj_demo_01/profiles \
  -F file=@examples/profiles/associate_search.yaml
curl -sS -X POST http://127.0.0.1:8099/projects/proj_demo_01/profiles \
  -F file=@examples/profiles/principal_planner_fullflow.yaml
curl -sS -X POST http://127.0.0.1:8099/projects/proj_demo_01/profiles \
  -F file=@examples/profiles/chat_assistant.yaml

curl -sS -X POST http://127.0.0.1:8099/projects/proj_demo_01/tools \
  -F file=@examples/tools/web_search_tool.yaml
```

`examples/quickstarts/demo_principal_fullflow.py` enables bootstrap and profile/tool uploads by default; use `--no-ensure-resources` to disable it.
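When scripting these uploads instead of running them by hand, printing only the HTTP status keeps the output readable (plain curl flags; a 2xx code indicates success, though the exact code is up to the API):

```bash
# Show just the HTTP status code for a profile upload.
curl -sS -o /dev/null -w '%{http_code}\n' \
  -X POST http://127.0.0.1:8099/projects/proj_demo_01/profiles \
  -F file=@examples/profiles/associate_search.yaml
```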
```bash
uv run -m examples.quickstarts.demo_simple_principal_stream \
  --project proj_demo_01 \
  --channel public \
  --agent demo_stream_01 \
  --question "Hello"
```

```bash
uv run -m examples.quickstarts.demo_principal_fullflow \
  --project proj_demo_01 \
  --channel public \
  --profile-name Principal_Planner_FullFlow \
  "help me to do a research on k8s"
```

Start the deterministic tool service first:
```bash
uv run -m services.tools.word_count
```

Then run the online demo (with an input text file):

```bash
uv run -m examples.quickstarts.demo_fork_join_word_count \
  --project proj_demo_01 \
  --channel public \
  --text-file /path/to/input.txt
```

```bash
curl -sS -X POST http://127.0.0.1:8099/projects/proj_demo_01/agents \
  -H 'content-type: application/json' \
  -d '{
    "agent_id":"ui_user_demo",
    "profile_name":"UI_Actor_Profile",
    "worker_target":"ui_worker",
    "tags":["ui"],
    "display_name":"UI Session",
    "owner_agent_id":"user_demo",
    "metadata":{"is_ui_agent":true},
    "init_state":true,
    "channel_id":"public"
  }'
```
```bash
curl -sS -X POST http://127.0.0.1:8099/projects/proj_demo_01/agents \
  -H 'content-type: application/json' \
  -d '{
    "agent_id":"chat_agent_demo",
    "profile_name":"Chat_Assistant",
    "worker_target":"worker_generic",
    "tags":["partner"],
    "display_name":"Chat Agent",
    "owner_agent_id":"user_demo",
    "init_state":true,
    "channel_id":"public"
  }'
```
```bash
uv run -m examples.quickstarts.demo_ui_action \
  --project proj_demo_01 \
  --channel public \
  --ui-agent-id ui_user_demo \
  --chat-agent-id chat_agent_demo
```

```bash
nats sub "cg.v1r4.proj_demo_01.public.str.agent.*.chunk"
```

```bash
uv run -m scripts.admin.inspect_turn --project proj_demo_01 --agent-id demo_stream_01
```

When you need to implement "task decomposition -> parallel execution -> aggregation writeback" inside Principal, refer to docs/EN/03_kernel_l1/batch_manager.md.
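The chunk subject above suggests all of the project's subjects share the `cg.v1r4.proj_demo_01` prefix; since the NATS `>` wildcard matches any remaining subject tokens, a broader subscription is a convenient debugging aid (assuming that prefix convention holds):

```bash
# Watch every message published under the demo project.
nats sub "cg.v1r4.proj_demo_01.>"
```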
- `CG_CONFIG_TOML`: specify the config file path (default `./config.toml`).
- `CG__SECTION__KEY`: config override (supports JSON values, highest precedence). Example: `CG__NATS__SERVERS='["nats://nats:4222"]'`.
- `PG_DSN`: override `[cardbox].postgres_dsn` (compatible with scripts/tools services).
- `NATS_SERVERS`: override `[nats].servers` (comma-separated).
- `API_URL`/`PROJECT_ID`: convenience settings for seed scripts.
- `CG_SENDER`: NATS header metadata.
- `CG__PROTOCOL__VERSION`: override the protocol version (equivalent to `[protocol].version` in `config.toml`).
- `GEMINI_API_KEY`: API key for the default Gemini workflow (or equivalent provider key usage).
- `CG__JUDGE__MODEL`: override `config.toml` `[judge].model` (for example: `gpt-5-mini`, `moonshot/kimi-k2.5`).
- `MOCK_SEARCH_LLM_MODEL`: override the `mock_search` LLM model (for example: `gpt-5-mini`, `moonshot/kimi-k2.5`).
- `MOCK_SEARCH_LLM_PROVIDER`: override the `mock_search` provider (for example: `openai`, `moonshot`).
- `OPENAI_API_KEY`/`MOONSHOT_API_KEY`: API keys for the corresponding providers.
- `MOCK_SEARCH_DEBUG`: enable debug output for `services.tools.mock_search` (`1`/`true`/`yes`/`on`).
- OpenAPI (Jina) tool key: injected by default through `JINA_API_KEY` (or via `options.jina.auth_env` to specify an env-var name); do not write this into cards/DB/logs.
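Pulling several of these together, a typical non-Gemini local environment might look like this (values illustrative; only set what you actually need):

```bash
export CG_CONFIG_TOML=./config.toml
export CG__JUDGE__MODEL="gpt-5-mini"
export MOCK_SEARCH_LLM_PROVIDER="openai"
export MOCK_SEARCH_LLM_MODEL="gpt-5-mini"
export OPENAI_API_KEY="..."
export MOCK_SEARCH_DEBUG=1
```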
Advanced: if you want to define provider and model routing directly in a Profile, see the section "Switching LLM providers (LiteLLM support)" in
docs/EN/02_building_agents/defining_profiles.md.
Note: by default, `config.toml` is read first; environment variables are used to override config values or provide runtime keys for scripts/tools services.