PublicProviderConf

PublicProviderConf is a TypeScript CLI and library that aggregates the canonical models.dev catalog together with several custom provider integrations (PPInfra, TokenFlux, Groq, Qiniu-hosted snapshots, and others). The tool normalizes capabilities, fills in missing metadata, and emits standardized JSON payloads that downstream apps can consume without bespoke adapters.

Highlights

  • Unified JSON schema for every provider with consistent capability flags
  • Concurrent fetcher pipeline that merges live APIs with maintained templates
  • Configurable CLI built with Commander and bundled with Vite, runnable via ts-node in development or as a production build
  • Automated GitHub Actions workflow that can publish fresh dist/ artifacts and sync them to CDN storage

Data Sources & Coverage

The aggregated dataset starts with the upstream https://models.dev/api.json. During each run we overlay:

  • Provider overrides from manual-templates/
  • Live fetchers for operators that are not (yet) covered by models.dev, such as ppinfra, tokenflux, and groq
  • Lightweight snapshots for ecosystems like ollama and siliconflow

After post-processing, the CLI writes the final catalog to dist/. Key outputs include:

  • dist/all.json – aggregated providers, model counts, and capability rollups
  • dist/{provider}.json – normalized payload for each individual provider

Consumers can point CDN tooling at the dist/ directory (see GitHub Actions workflow) to publish the latest data for public access.

Extended Capability Fields

The dataset keeps legacy capability fields unchanged for backward compatibility. Existing fields such as reasoning, reasoning_effort, and interleaved are preserved as-is.

When we need richer model-level metadata, we add it under extra_capabilities instead of changing old fields. The first supported extension is extra_capabilities.reasoning, which represents a fixed reasoning portrait for the model itself.

Design Rules

  • extra_capabilities.reasoning describes the model portrait, not provider-specific parameter exposure
  • The same model should have the same reasoning portrait across providers whenever possible
  • If a model is not covered by the portrait registry, extra_capabilities.reasoning is omitted
  • Legacy fields remain the compatibility layer for downstream consumers that already depend on them
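
The design rules above can be sketched as a fallback resolver: prefer the richer portrait under extra_capabilities.reasoning, and fall back to the legacy flags when a model is not covered by the portrait registry. The type and function names below are illustrative assumptions, not the project's actual code.

```typescript
// Sketch of the portrait-vs-legacy fallback rule. Field names come from
// this README's example; the types themselves are simplified assumptions.
interface LegacyReasoning {
  supported: boolean;
  default?: boolean;
}

interface ReasoningPortrait {
  supported: boolean;
  default_enabled?: boolean;
  mode?: "budget" | "effort" | "level" | "fixed" | "mixed";
}

interface Model {
  id: string;
  reasoning?: LegacyReasoning;
  extra_capabilities?: { reasoning?: ReasoningPortrait };
}

function effectiveReasoning(model: Model): ReasoningPortrait | undefined {
  // The portrait wins when present; otherwise legacy fields remain the
  // compatibility layer, and an uncovered model yields no portrait at all.
  if (model.extra_capabilities?.reasoning) {
    return model.extra_capabilities.reasoning;
  }
  if (model.reasoning) {
    return {
      supported: model.reasoning.supported,
      default_enabled: model.reasoning.default,
    };
  }
  return undefined;
}
```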

Example

{
  "id": "gpt-5",
  "reasoning": {
    "supported": true,
    "default": true
  },
  "extra_capabilities": {
    "reasoning": {
      "supported": true,
      "default_enabled": true,
      "mode": "effort",
      "effort": "medium",
      "effort_options": ["minimal", "low", "medium", "high"],
      "verbosity": "medium",
      "verbosity_options": ["low", "medium", "high"],
      "visibility": "hidden"
    }
  }
}

extra_capabilities.reasoning Fields

  • supported: whether the model family supports reasoning
  • default_enabled: whether reasoning is enabled by default in the model portrait
  • mode: one of budget, effort, level, fixed, or mixed
  • budget: token-budget style reasoning controls such as min/max/default/auto/off
  • effort: default reasoning effort
  • effort_options: supported effort levels
  • verbosity: default reasoning verbosity
  • verbosity_options: supported verbosity levels
  • level: default reasoning level for models that use level-based controls
  • level_options: supported reasoning levels
  • interleaved: whether interleaved reasoning is part of the model portrait
  • summaries: whether reasoning summaries are part of the model portrait
  • visibility: one of hidden, summary, full, or mixed
  • continuation: continuation mechanism hints such as thinking_blocks or thought_signatures
  • notes: short implementation notes when the model family has important quirks
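
The field list above maps onto a TypeScript shape roughly like the sketch below. The union members and optionality are inferred from this README, not copied from the project's real type definitions (which live under src/models/).

```typescript
// Inferred sketch of the extra_capabilities.reasoning shape.
type ReasoningMode = "budget" | "effort" | "level" | "fixed" | "mixed";
type ReasoningVisibility = "hidden" | "summary" | "full" | "mixed";

// Token-budget style controls (min/max/default/auto/off); exact field
// types are an assumption.
interface ReasoningBudget {
  min?: number;
  max?: number;
  default?: number;
  auto?: boolean;
  off?: boolean;
}

interface ReasoningPortrait {
  supported: boolean;
  default_enabled?: boolean;
  mode?: ReasoningMode;
  budget?: ReasoningBudget;
  effort?: string;
  effort_options?: string[];
  verbosity?: string;
  verbosity_options?: string[];
  level?: string;
  level_options?: string[];
  interleaved?: boolean;
  summaries?: boolean;
  visibility?: ReasoningVisibility;
  continuation?: string; // e.g. "thinking_blocks" or "thought_signatures"
  notes?: string;
}

// The gpt-5 example from this README type-checks against the sketch:
const gpt5Portrait: ReasoningPortrait = {
  supported: true,
  default_enabled: true,
  mode: "effort",
  effort: "medium",
  effort_options: ["minimal", "low", "medium", "high"],
  verbosity: "medium",
  verbosity_options: ["low", "medium", "high"],
  visibility: "hidden",
};
```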

Current Coverage

The initial portrait registry covers these model families:

  • OpenAI: gpt-5, gpt-5.1, o3, o4-mini, and close variants/aliases
  • Anthropic: Claude 3.7 and Claude 4 reasoning models
  • Google: Gemini 2.5 and Gemini 3 reasoning-capable models

Getting Started

Prerequisites

  • Node.js 18+
  • pnpm 8+

Install & Build

git clone https://github.com/ThinkInAIXYZ/PublicProviderConf.git
cd PublicProviderConf
pnpm install
pnpm build

pnpm build runs both Vite targets: the library bundle under build/ and the bundled CLI at build/cli.js.

CLI Usage

Development Flow

pnpm run dev                 # ts-node, defaults to fetch-all
ts-node src/cli.ts fetch-all # explicit command
ts-node src/cli.ts fetch-providers -p ppinfra,tokenflux

Production / Bundled CLI

pnpm build
pnpm start                              # equivalent to node build/cli.js fetch-all
node build/cli.js fetch-providers -p ppinfra,tokenflux -o ./dist

Global Installation (optional)

pnpm install -g .
public-provider-conf fetch-all
public-provider-conf fetch-providers -p ppinfra,tokenflux

Command Reference

public-provider-conf fetch-all [options]
public-provider-conf fetch-providers -p <providers> [options]

Options:
  -p, --providers <providers>  Comma-separated provider IDs
  -o, --output <dir>           Output directory (default: dist)
  -h, --help                   Show command help

Supported Providers

  • models.dev catalog (OpenAI, Anthropic, OpenRouter, Google Gemini, Vercel, GitHub Models, DeepSeek, etc.)
  • PPInfra (live API)
  • TokenFlux (marketplace API)
  • Groq (requires GROQ_API_KEY)
  • AIHubMix (live API)
  • BurnCloud (manual template)
  • Ollama (snapshot templates)
  • SiliconFlow (snapshot templates)

Adding a new provider usually involves implementing Provider under src/providers/, adding configuration to src/config/app-config.ts, and optionally contributing templates to manual-templates/.

AIHubMix is fetched live from https://aihubmix.com/api/v1/models, which gives you a ready-made fallback dataset for models not yet covered by the primary models.dev catalog; keeping the provider enabled ensures dist/aihubmix.json stays current without manual snapshots.

BurnCloud, by contrast, relies exclusively on manual-templates/burncloud.json, which is regenerated from models.txt snapshots so the CLI can ship the latest catalog without hitting proprietary endpoints. Keep that template up to date (and re-run jq -c to refresh dist/burncloud.json) whenever BurnCloud publishes new models or capability metadata.
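
A new provider integration might look roughly like the sketch below. The Provider interface shape and method names here are guesses for illustration only; check src/providers/ for the project's actual contract.

```typescript
// Hypothetical sketch of a provider integration. Interface shape,
// method names, and the provider ID are assumptions, not the real API.
interface NormalizedModel {
  id: string;
  name: string;
}

interface Provider {
  id: string;
  fetchModels(): Promise<NormalizedModel[]>;
}

class ExampleProvider implements Provider {
  id = "exampleprovider"; // hypothetical provider ID

  async fetchModels(): Promise<NormalizedModel[]> {
    // A live integration would call the operator's API here; a
    // template-backed one (like BurnCloud) would read manual-templates/.
    return [{ id: "example-model-1", name: "Example Model 1" }];
  }
}
```

After implementing the class, the provider would still need to be registered in src/config/app-config.ts so the fetcher pipeline picks it up.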

Project Structure

src/
├─ cli.ts                 # CLI entry point (Commander)
├─ commands/              # fetch-all, fetch-providers commands
├─ config/                # default configuration and loaders
├─ fetcher/               # HTTP + file-based fetch orchestrators
├─ models/                # Type definitions and helpers
├─ output/                # Writers, validators, distribution helpers
└─ providers/             # Individual provider integrations

Configuration & Environment

  • Default endpoints and flags live in src/config/app-config.ts
  • Override the models.dev endpoint with MODELS_DEV_API_URL
  • Provide an offline fallback snapshot via MODELS_DEV_SNAPSHOT_PATH
  • Set API secrets (e.g. GROQ_API_KEY) in your environment or CI secrets
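
The environment overrides above might be resolved along these lines. The default URL and precedence mirror this README; the function itself is a sketch, not the actual loader in src/config/app-config.ts.

```typescript
// Sketch of env-based config resolution for the models.dev source.
interface ModelsDevConfig {
  apiUrl: string;
  snapshotPath?: string;
}

function resolveModelsDevConfig(
  env: Record<string, string | undefined>
): ModelsDevConfig {
  return {
    // MODELS_DEV_API_URL overrides the upstream endpoint.
    apiUrl: env.MODELS_DEV_API_URL ?? "https://models.dev/api.json",
    // MODELS_DEV_SNAPSHOT_PATH points at an offline fallback snapshot.
    snapshotPath: env.MODELS_DEV_SNAPSHOT_PATH,
  };
}

const config = resolveModelsDevConfig({});
console.log(config.apiUrl); // "https://models.dev/api.json"
```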

Development Tips

pnpm install      # install deps
pnpm run dev      # ts-node dev loop
pnpm build        # Vite builds (library + CLI)
pnpm start        # run bundled CLI

Set DEBUG=true (e.g. DEBUG=true pnpm run dev) when you need verbose fetcher logs.

CI & Distribution

The .github/workflows/fetch-models.yml workflow builds the project, runs the fetch pipeline, validates JSON output, and can sync the dist/ directory to a Qiniu CDN bucket for distribution. Ensure the Qiniu credentials are available as repository secrets when enabling the CDN path.

Contributing

  1. Fork the repository
  2. Create a feature branch
  3. Run pnpm build before raising a PR
  4. Submit your PR with relevant test notes and updated templates if applicable

Issues and ideas are always welcome.

License

Apache-2.0 license

About

Public Provider Config for DeepChat
