PublicProviderConf is a TypeScript CLI and library that aggregates the canonical models.dev catalog together with several custom provider integrations (PPInfra, TokenFlux, Groq, Qiniu-hosted snapshots, and others). The tool normalizes capabilities, fills in missing metadata, and emits standardized JSON payloads that downstream apps can consume without bespoke adapters.
- Unified JSON schema for every provider with consistent capability flags
- Concurrent fetcher pipeline that merges live APIs with maintained templates
- Configurable CLI built with Commander + Vite for both dev (ts-node) and production builds
- Automated GitHub Actions workflow that can publish fresh `dist/` artifacts and sync them to CDN storage
The aggregated dataset starts with the upstream https://models.dev/api.json. During each run we overlay:
- Provider overrides from `manual-templates/`
- Live fetchers for operators that are not (yet) covered by models.dev, such as `ppinfra`, `tokenflux`, and `groq`
- Lightweight snapshots for ecosystems like `ollama` and `siliconflow`
After post-processing, the CLI writes the final catalog to dist/. Key outputs include:
- `dist/all.json` – aggregated providers, model counts, and capability rollups
- `dist/{provider}.json` – normalized payload for each individual provider
Consumers can point CDN tooling at the dist/ directory (see GitHub Actions workflow) to publish the latest data for public access.
The dataset keeps legacy capability fields unchanged for backward compatibility. Existing fields such as reasoning, reasoning_effort, and interleaved are preserved as-is.
When we need richer model-level metadata, we add it under extra_capabilities instead of changing old fields. The first supported extension is extra_capabilities.reasoning, which represents a fixed reasoning portrait for the model itself.
- `extra_capabilities.reasoning` describes the model portrait, not provider-specific parameter exposure
- The same model should have the same reasoning portrait across providers whenever possible
- If a model is not covered by the portrait registry, `extra_capabilities.reasoning` is omitted
- Legacy fields remain the compatibility layer for downstream consumers that already depend on them
```json
{
  "id": "gpt-5",
  "reasoning": {
    "supported": true,
    "default": true
  },
  "extra_capabilities": {
    "reasoning": {
      "supported": true,
      "default_enabled": true,
      "mode": "effort",
      "effort": "medium",
      "effort_options": ["minimal", "low", "medium", "high"],
      "verbosity": "medium",
      "verbosity_options": ["low", "medium", "high"],
      "visibility": "hidden"
    }
  }
}
```

- `supported`: whether the model family supports reasoning
- `default_enabled`: whether reasoning is enabled by default in the model portrait
- `mode`: one of `budget`, `effort`, `level`, `fixed`, or `mixed`
- `budget`: token-budget style reasoning controls such as min/max/default/auto/off
- `effort`: default reasoning effort
- `effort_options`: supported effort levels
- `verbosity`: default reasoning verbosity
- `verbosity_options`: supported verbosity levels
- `level`: default reasoning level for models that use level-based controls
- `level_options`: supported reasoning levels
- `interleaved`: whether interleaved reasoning is part of the model portrait
- `summaries`: whether reasoning summaries are part of the model portrait
- `visibility`: one of `hidden`, `summary`, `full`, or `mixed`
- `continuation`: continuation mechanism hints such as `thinking_blocks` or `thought_signatures`
- `notes`: short implementation notes when the model family has important quirks
The initial portrait registry covers these model families:
- OpenAI: `gpt-5`, `gpt-5.1`, `o3`, `o4-mini`, and close variants/aliases
- Anthropic: Claude 3.7 and Claude 4 reasoning models
- Google: Gemini 2.5 and Gemini 3 reasoning-capable models
- Node.js 18+
- pnpm 8+
```sh
git clone https://github.com/ThinkInAIXYZ/PublicProviderConf.git
cd PublicProviderConf
pnpm install
pnpm build
```

`pnpm build` runs both Vite targets: the library bundle under `build/` and the bundled CLI at `build/cli.js`.
Development (ts-node):

```sh
pnpm run dev                    # ts-node, defaults to fetch-all
ts-node src/cli.ts fetch-all    # explicit command
ts-node src/cli.ts fetch-providers -p ppinfra,tokenflux
```

Production build:

```sh
pnpm build
pnpm start                      # equivalent to node build/cli.js fetch-all
node build/cli.js fetch-providers -p ppinfra,tokenflux -o ./dist
```

Global install:

```sh
pnpm install -g .
public-provider-conf fetch-all
public-provider-conf fetch-providers -p ppinfra,tokenflux
```

CLI usage:

```
public-provider-conf fetch-all [options]
public-provider-conf fetch-providers -p <providers> [options]

Options:
  -p, --providers <providers>  Comma-separated provider IDs
  -o, --output <dir>           Output directory (default: dist)
  -h, --help                   Show command help
```

Supported provider sources:

- models.dev catalog (OpenAI, Anthropic, OpenRouter, Google Gemini, Vercel, GitHub Models, DeepSeek, etc.)
- PPInfra (live API)
- TokenFlux (marketplace API)
- Groq (requires `GROQ_API_KEY`)
- AIHubMix (live API)
- BurnCloud (manual template)
- Ollama (snapshot templates)
- SiliconFlow (snapshot templates)
Adding a new provider usually involves implementing `Provider` under `src/providers/`, adding configuration to `src/config/app-config.ts`, and optionally contributing templates to `manual-templates/`.
AIHubMix is fetched live from https://aihubmix.com/api/v1/models so you have a ready-made fallback dataset for models that aren't yet covered by the primary models.dev catalog. Keeping the provider enabled ensures dist/aihubmix.json stays current without manual snapshots.
BurnCloud is sourced exclusively from `manual-templates/burncloud.json`, which is regenerated from `models.txt` snapshots so the CLI can ship the latest catalog without hitting proprietary endpoints. Keep that template up to date (and re-run `jq -c` to refresh `dist/burncloud.json`) whenever BurnCloud publishes new models or capability metadata.
```
src/
├─ cli.ts        # CLI entry point (Commander)
├─ commands/     # fetch-all, fetch-providers commands
├─ config/       # default configuration and loaders
├─ fetcher/      # HTTP + file-based fetch orchestrators
├─ models/       # Type definitions and helpers
├─ output/       # Writers, validators, distribution helpers
└─ providers/    # Individual provider integrations
```
- Default endpoints and flags live in `src/config/app-config.ts`
- Override the models.dev endpoint with `MODELS_DEV_API_URL`
- Provide an offline fallback snapshot via `MODELS_DEV_SNAPSHOT_PATH`
- Set API secrets (e.g. `GROQ_API_KEY`) in your environment or CI secrets
```sh
pnpm install    # install deps
pnpm run dev    # ts-node dev loop
pnpm build      # Vite builds (library + CLI)
pnpm start      # run bundled CLI
```

Use `DEBUG=true` when you need verbose fetcher logs.
The .github/workflows/fetch-models.yml workflow builds the project, runs the fetch pipeline, validates JSON output, and can sync the dist/ directory to a Qiniu CDN bucket for distribution. Ensure the Qiniu credentials are available as repository secrets when enabling the CDN path.
- Fork the repository
- Create a feature branch
- Run `pnpm build` before raising a PR
- Submit your PR with relevant test notes and updated templates if applicable
Issues and ideas are always welcome.
Apache-2.0 license