BRIDGE is a local-first LLM operations workbench for experimenting with:
- provider management
- chat sessions
- prompt templates
- datasets + imports
- workbench runs
- workflows
- automations
- audit/history
It is currently pre-alpha and best suited for a single user or a small trusted group on local/private servers.
BRIDGE is usable, but still evolving quickly.
Current highlights:
- OpenAI-compatible provider support
- generic OpenAPI provider support
- saved provider tokens with DB-backed storage
- model list discovery for supported providers
- chat sessions with streaming support for OpenAI-compatible providers
- context-window / max-output model profile settings
- dataset imports and previews
- workbench runs with staged progress + run status
- workflows with JSON definitions and step builder helpers
- automations for workflow, raw prompt, and template-backed prompt execution
- audit trail and run history
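To make the first highlight concrete, here is a minimal sketch of the request body an OpenAI-compatible provider expects. The model id is a placeholder, not a BRIDGE default, and the endpoint path shown in the docstring is the standard OpenAI-compatible one:

```python
import json

def build_chat_payload(model: str, messages: list[dict], stream: bool = True) -> dict:
    """Body for a POST to an OpenAI-compatible /v1/chat/completions endpoint."""
    return {"model": model, "messages": messages, "stream": stream}

payload = build_chat_payload(
    "example-model",  # placeholder model id, not a BRIDGE default
    [{"role": "user", "content": "Hello"}],
)
body = json.dumps(payload)  # POST this with an Authorization: Bearer <token> header
```

Setting `stream: true` is what enables the token-by-token streaming mentioned above for OpenAI-compatible providers.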
Current caveats:
- pre-alpha schema churn is still being cleaned up
- some providers/features are more polished than others
- security posture is aimed at trusted local/private deployments first
- token budgeting is heuristic, not tokenizer-exact
Local UI screenshots live in `docs/screenshots/`.
The repo now includes a lightweight screenshot generator for docs/README refreshes. It uses Playwright against the built frontend with mocked API responses, so it does not need a live backend or real provider keys.
Generate/update screenshots with:
```bash
cd frontend
npm install
npx playwright install chromium
npm run build
npm run screenshots
```

Current generated shots:
- dashboard
- chat
- providers
- workbench
- workflows
- automations
Recommended maintainer workflow:
- refresh screenshots before releases or notable UI changes
- keep the set small and representative
- treat them as docs assets, not pixel-perfect regression tests
Right now BRIDGE should be treated as:
- local-first
- single-user or trusted small-team
- not hardened for broad internet exposure by default
If you put it on a public server, assume you need to add your own authentication, network restrictions, and hardening.
- `backend/` — FastAPI + SQLAlchemy backend
- `frontend/` — Vite/React frontend
- `docs/` — project notes, checklists, and planning docs
- `bootstrap.sh` — local setup helper
- `run-dev.sh` — local dev launcher
- `migrate.sh` — Alembic upgrade helper
```bash
cd ~/code/foss-projects/BRIDGE
./bootstrap.sh
./migrate.sh
./run-dev.sh
```

`run-dev.sh` applies Alembic migrations before starting the backend.
```bash
cd backend
python3 -m venv .venv
source .venv/bin/activate
pip install -e .
alembic upgrade head
uvicorn app.main:app --reload --host 0.0.0.0 --port 8080
```

```bash
cd frontend
npm install
npm run dev
```

BRIDGE currently supports two database paths:
- startup bootstrap still uses `create_all()` to keep fresh local installs simple
- seed data is then applied on startup
- use Alembic migrations to move the schema forward
- recommended command:

```bash
cd ~/code/foss-projects/BRIDGE
./migrate.sh
```

For existing databases, treat Alembic as the source of truth for upgrades.
The remaining startup create_all() / SQLite compatibility behavior is temporary pre-alpha convenience, not the long-term migration strategy.
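The difference between the two paths can be sketched with plain `sqlite3`. BRIDGE itself uses SQLAlchemy's `create_all()` and Alembic; the table names and version keys below are purely illustrative:

```python
import sqlite3

def bootstrap_create_all(conn: sqlite3.Connection) -> None:
    # Fresh-install path: idempotent, create_all()-style bootstrap.
    conn.execute("CREATE TABLE IF NOT EXISTS providers (id INTEGER PRIMARY KEY, name TEXT)")

def migrate(conn: sqlite3.Connection, migrations: dict[str, str]) -> None:
    # Upgrade path: apply versioned migrations in order, Alembic-style,
    # recording each applied version so reruns are no-ops.
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (version TEXT)")
    applied = {row[0] for row in conn.execute("SELECT version FROM schema_version")}
    for version, sql in sorted(migrations.items()):
        if version not in applied:
            conn.execute(sql)
            conn.execute("INSERT INTO schema_version VALUES (?)", (version,))

conn = sqlite3.connect(":memory:")
bootstrap_create_all(conn)
migrate(conn, {"0001": "ALTER TABLE providers ADD COLUMN token TEXT"})
```

The key property is that the migration path carries forward schema changes that `CREATE TABLE IF NOT EXISTS` alone would silently skip on an existing database.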
BRIDGE supports provider secrets in two ways:
- environment variable fallback
- DB-backed saved secrets from the UI
Notes:
- saved provider secrets are masked in settings responses
- local runtime artifacts, databases, `.env*`, and logs should not be committed
- review `.gitignore` before committing local changes from an active dev environment
Configure and test providers, discover models, and attach per-model metadata like:
- context window
- max output tokens
Chat replays saved session history each turn. For models with configured context limits, BRIDGE estimates token usage, trims older history when needed, and surfaces context-budget info in the UI.
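The trimming behavior can be sketched as follows. The 4-characters-per-token estimate and the function names are illustrative assumptions, consistent with the "heuristic, not tokenizer-exact" caveat above:

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token; not tokenizer-exact.
    return max(1, len(text) // 4)

def trim_history(messages: list[dict], context_window: int, max_output: int) -> list[dict]:
    """Drop oldest messages until the estimated prompt fits the context budget."""
    budget = context_window - max_output  # reserve room for the model's reply
    kept: list[dict] = []
    used = 0
    # Walk newest-to-oldest so the most recent turns survive trimming.
    for msg in reversed(messages):
        cost = estimate_tokens(msg["content"])
        if used + cost > budget:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))
```

Reserving `max_output` up front is what keeps a long chat from starving the reply of output tokens.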
Run datasets through templates/providers, inspect previews, and follow staged progress.
Create reusable step-based definitions (currently prompt-template + provider steps), run them manually, and inspect run status.
Schedule:
- workflows
- raw prompts
- template-backed prompts
Supports interval, one-time, and daily schedules.
The current release-prep checklist lives in `docs/pre-alpha-checklist.md`.
- security review is in progress
- some debug/audit surfaces are still being tightened
- migrations/upgrade flow is improving but not fully polished yet
- not all provider plugins support the same level of streaming/model introspection
- long-chat trimming is heuristic and does not yet summarize dropped history
If you’re part of an early test group, the most useful bug reports include:
- what page/flow you were using
- provider type
- whether this was a fresh install or upgraded DB
- the exact error shown in the UI
- backend logs / traceback
- whether `./migrate.sh` had been run
Before committing, it’s worth checking:
```bash
git status
./migrate.sh
```

And scanning for obvious secrets or local artifacts:

```bash
grep -RInE "(OPENAI_API_KEY|ANTHROPIC_API_KEY|sk-|Bearer )" . --exclude-dir=node_modules --exclude-dir=.git --exclude-dir=.venv
```

If you’re reading this from the repo, assume the codebase is moving fast. Prefer small, reviewable commits, and treat local data/secrets as hostile to commits by default.