OpenAI-compatible relay for Amp with two modes: smart and rush.

- Go implementation, no UI
- `.env` config for `AMP_API_KEY`
- OpenAI-compatible endpoints: `GET /v1/models`, `POST /v1/chat/completions`
- Mode switching by `mode` or `model`: `smart`/`amp-smart`, `rush`/`amp-rush`
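The mode/model mapping above could be resolved with logic along these lines (an illustrative sketch, not the relay's actual code; the fallback default is an assumption):

```go
package main

import (
	"fmt"
	"strings"
)

// resolveMode picks the relay mode from an explicit "mode" field, or
// falls back to the OpenAI-style "model" name. The mapping mirrors the
// one documented above: smart/amp-smart and rush/amp-rush.
func resolveMode(mode, model string) string {
	if mode == "smart" || mode == "rush" {
		return mode // explicit mode wins over model name
	}
	switch strings.TrimPrefix(model, "amp-") {
	case "smart", "rush":
		return strings.TrimPrefix(model, "amp-")
	}
	return "smart" // assumed default; check the relay's config
}

func main() {
	fmt.Println(resolveMode("", "amp-rush"))      // model name selects rush
	fmt.Println(resolveMode("rush", "amp-smart")) // explicit mode wins: rush
}
```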
```shell
cd /opt/LLM/amp2api
cp .env.example .env
# edit .env and set AMP_API_KEY
go run .
```

Server default: http://127.0.0.1:18086
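A minimal `.env` might look like this (only `AMP_API_KEY` and `DEBUG` are documented here; see `.env.example` for the full set of keys):

```shell
# Required: your Amp API key
AMP_API_KEY=your-amp-key-here
# Optional: verbose logging (see the DEBUG note below)
DEBUG=true
```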
```shell
curl http://127.0.0.1:18086/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "amp-smart",
    "messages": [
      {"role": "user", "content": "Write a Go hello world"}
    ]
  }'
```

Use explicit mode:
```shell
curl http://127.0.0.1:18086/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "mode": "rush",
    "messages": [
      {"role": "user", "content": "Summarize this code change in 3 bullets"}
    ]
  }'
```

Notes:

- `stream: true` is supported as a pseudo-stream SSE (a single chunk followed by `[DONE]`).
- Amp upstream may return `402` if the key's account has no paid credits for SDK/execute-style usage.
- `deep` mode is currently disabled in this relay.
- This project does not expose Amp internal tool details in API output.
- Set `DEBUG=true` (or `DEBUG=Ture`) to print verbose logs: client request params, upstream request/response, and API response.
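The tolerant `DEBUG` check above could be implemented along these lines (a sketch assuming a simple case-insensitive match that also accepts the "Ture" typo; the relay's real parsing may differ):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// debugEnabled reports whether verbose logging should be on.
// Per the note above, both "true" and the common typo "Ture"
// are accepted; the comparison is case-insensitive.
func debugEnabled(v string) bool {
	switch strings.ToLower(v) {
	case "true", "ture":
		return true
	}
	return false
}

func main() {
	fmt.Println(debugEnabled(os.Getenv("DEBUG")))
}
```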