
Getting Started with Continuum Ghost

This guide walks you through the complete authentication flow: enrolling a client, requesting a challenge, generating a proof, and receiving an access token.

Prerequisites

  • Node.js >= 22
  • Docker and Docker Compose (for the server)
  • curl or any HTTP client

1. Start the Server

git clone https://github.com/gthstepsecurity/2fapi-server.git
cd 2fapi-server
npm install

Start in development mode (in-memory storage, no external dependencies):

npm run dev

You should see:

[2FApi] Listening on http://0.0.0.0:3000
[2FApi] Health: http://0.0.0.0:3000/health

Verify the server is running:

curl http://localhost:3000/health
{"status":"ok","version":"1.0"}

Warning: Development mode uses in-memory storage and stub cryptographic verifiers. See Going to Production before deploying.
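Scripted setups often want to block until the server is up before continuing. A minimal readiness helper in Node.js, assuming only the `/health` response shape shown above (the polling URL, retry count, and delay are illustrative defaults):

```javascript
// Parse a /health response body and report readiness.
// Any non-JSON body (e.g. during startup) counts as not ready.
function isHealthy(body) {
  try {
    return JSON.parse(body).status === 'ok';
  } catch {
    return false;
  }
}

// Poll the health endpoint until it answers, then proceed.
async function waitForServer(url = 'http://localhost:3000/health', attempts = 10) {
  for (let i = 0; i < attempts; i++) {
    try {
      const res = await fetch(url);
      if (isHealthy(await res.text())) return true;
    } catch {
      // Server not accepting connections yet; fall through to retry.
    }
    await new Promise((resolve) => setTimeout(resolve, 500));
  }
  return false;
}
```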

2. Enroll a Client

Register a new client by sending a Pedersen commitment and a proof of possession.

In development mode, any valid base64-encoded bytes are accepted. In production, these must be real cryptographic values generated by the client SDK.

# Generate random bytes for the example (32 bytes each)
COMMITMENT=$(openssl rand -base64 32)
PROOF_S=$(openssl rand -base64 32)
PROOF_R=$(openssl rand -base64 32)
PROOF_A=$(openssl rand -base64 32)

# Concatenate the three proof components as raw bytes, then encode once
# (announcement + responseS + responseR = 96 bytes; concatenating the base64
# strings and re-encoding would double-encode and change the length)
PROOF=$({ echo -n "${PROOF_A}" | base64 --decode; echo -n "${PROOF_S}" | base64 --decode; echo -n "${PROOF_R}" | base64 --decode; } | base64 | tr -d '\n')

curl -s http://localhost:3000/v1/clients \
  -H "Content-Type: application/json" \
  -d "{
    \"clientIdentifier\": \"my-service\",
    \"commitment\": \"${COMMITMENT}\",
    \"proofOfPossession\": \"${PROOF}\"
  }" | jq .

Response:

{
  "referenceId": "a1b2c3d4...",
  "clientIdentifier": "my-service"
}

3. Request a Challenge

Before authenticating, the client must request a fresh nonce from the server.

CREDENTIAL=$(openssl rand -base64 32)
CHANNEL_BINDING=$(openssl rand -base64 32)

curl -s http://localhost:3000/v1/challenges \
  -H "Content-Type: application/json" \
  -d "{
    \"clientIdentifier\": \"my-service\",
    \"credential\": \"${CREDENTIAL}\",
    \"channelBinding\": \"${CHANNEL_BINDING}\"
  }" | jq .

Response:

{
  "challengeId": "ch-abc123...",
  "nonce": "base64...",
  "channelBinding": "base64...",
  "expiresAt": "2026-03-27T14:00:00.000Z",
  "protocolVersion": "1.0"
}

Save the challengeId — you will need it in the next step.
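Challenges expire, so a client should check `expiresAt` before spending time on proof generation. A small freshness check, assuming the response fields shown above; the safety margin for clock skew and network latency is an assumed value, not part of the protocol:

```javascript
// Decide whether a challenge is still usable, leaving a margin so the
// proof does not arrive after the nonce has already expired.
function challengeIsFresh(challenge, marginMs = 2000, now = Date.now()) {
  const expires = Date.parse(challenge.expiresAt);
  return Number.isFinite(expires) && expires - now > marginMs;
}

const challenge = {
  challengeId: 'ch-abc123',
  expiresAt: new Date(Date.now() + 60_000).toISOString(),
};

if (!challengeIsFresh(challenge)) {
  // Request a new challenge instead of submitting a proof against a stale nonce.
}
```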

4. Verify a Proof and Get a Token

Submit a zero-knowledge proof bound to the challenge nonce. On success, the server issues an access token.

CHALLENGE_ID="<challengeId from step 3>"
# Random bytes stand in for a real Sigma proof here; the dev-mode stub
# verifier accepts them, production requires a real proof (see step 6).
PROOF=$(openssl rand -base64 32)

curl -s http://localhost:3000/v1/verify \
  -H "Content-Type: application/json" \
  -d "{
    \"clientIdentifier\": \"my-service\",
    \"challengeId\": \"${CHALLENGE_ID}\",
    \"proof\": \"${PROOF}\",
    \"channelBinding\": \"${CHANNEL_BINDING}\",
    \"domainSeparationTag\": \"2FApi-Sigma-v1\"
  }" | jq .

Response:

{
  "accessToken": "eyJhbG...",
  "tokenType": "Bearer",
  "expiresAt": "2026-03-27T14:15:00.000Z",
  "expiresIn": 900
}
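With a 900-second lifetime, long-running clients should refresh the token before it lapses. A sketch of the timing calculation from the response fields above; refreshing at 80% of the lifetime is a common convention and an assumption here, not part of the protocol:

```javascript
// Compute how long to wait before re-running the challenge/verify flow.
// Refreshing early avoids using a token that expires mid-request.
function refreshDelayMs(tokenResponse, fraction = 0.8) {
  return Math.max(0, Math.floor(tokenResponse.expiresIn * 1000 * fraction));
}

const tokenResponse = { accessToken: 'eyJhbG...', tokenType: 'Bearer', expiresIn: 900 };
const delay = refreshDelayMs(tokenResponse); // 720000 ms = 12 minutes for a 15-minute token
```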

5. Access a Protected Resource

Use the access token to call protected endpoints:

ACCESS_TOKEN="<accessToken from step 4>"

curl -s http://localhost:3000/v1/resources/my-data \
  -H "Authorization: Bearer ${ACCESS_TOKEN}" | jq .

Response:

{
  "resourceId": "my-data",
  "clientIdentifier": "my-service",
  "audience": "default",
  "data": {}
}
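From Node.js, the equivalent call builds the Authorization header from the verify response. A minimal helper, assuming the token response shape from step 4; using `tokenType` from the server keeps the header correct rather than hard-coding "Bearer":

```javascript
// Build fetch options for a protected call from a verify response.
function authorizedRequest(tokenResponse) {
  return {
    headers: {
      Authorization: `${tokenResponse.tokenType} ${tokenResponse.accessToken}`,
    },
  };
}

const opts = authorizedRequest({ tokenType: 'Bearer', accessToken: 'eyJhbG...' });
// await fetch('http://localhost:3000/v1/resources/my-data', opts);
```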

Protocol Flow

    Client                                Server
      │                                     │
      │──── POST /v1/clients ──────────────▶│  1. Enrollment
      │     {commitment, proofOfPossession} │     Store commitment
      │◀─── 201 {referenceId} ─────────────│
      │                                     │
      │──── POST /v1/challenges ───────────▶│  2. Challenge
      │     {clientIdentifier, credential}  │     Generate nonce
      │◀─── 200 {challengeId, nonce} ──────│
      │                                     │
      │     ┌─────────────────────┐         │
      │     │ Generate Sigma proof│         │  3. Proof generation
      │     │ bound to nonce      │         │     (client-side only)
      │     └─────────────────────┘         │
      │                                     │
      │──── POST /v1/verify ───────────────▶│  4. Verification
      │     {challengeId, proof}            │     Verify proof against
      │◀─── 200 {accessToken} ─────────────│     stored commitment
      │                                     │
      │──── GET /v1/resources/X ───────────▶│  5. Access
      │     Authorization: Bearer <token>   │     Validate token
      │◀─── 200 {data} ───────────────────│
      │                                     │

Key property: The server never sees the client's secret at any point. It stores only the commitment (a public value) and verifies proofs against it.
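The five steps above can be sketched as one driver function. This is not the SDK: the endpoint paths, field names, and domain separation tag are the documented ones, while the injected `http` helper and the `makeProof` callback are illustrative placeholders for a JSON-returning HTTP client and real client-side proof generation:

```javascript
// Run enrollment -> challenge -> proof -> verify -> resource access in order.
// `http(path, options)` is expected to behave like fetch but return parsed
// JSON; injecting it keeps the sequencing clear and testable offline.
async function authenticate(http, {
  clientIdentifier, commitment, proofOfPossession,
  credential, channelBinding, makeProof,
}) {
  const post = (path, body) => http(path, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(body),
  });

  // 1. Enrollment: register the public commitment.
  await post('/v1/clients', { clientIdentifier, commitment, proofOfPossession });

  // 2. Challenge: obtain a fresh nonce.
  const challenge = await post('/v1/challenges', { clientIdentifier, credential, channelBinding });

  // 3. Proof generation happens client-side only, bound to the nonce.
  const proof = makeProof(challenge.nonce);

  // 4. Verification: exchange the proof for an access token.
  const token = await post('/v1/verify', {
    clientIdentifier,
    challengeId: challenge.challengeId,
    proof,
    channelBinding,
    domainSeparationTag: '2FApi-Sigma-v1',
  });

  // 5. Access: call a protected resource with the Bearer token.
  return http('/v1/resources/my-data', {
    headers: { Authorization: `${token.tokenType} ${token.accessToken}` },
  });
}
```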

6. Going to Production

The development server uses stub verifiers and in-memory storage. A production deployment requires:

Infrastructure

Set up PostgreSQL and Redis using the included Docker Compose:

cp .env.example .env

Edit .env and set strong, unique passwords:

POSTGRES_PASSWORD=<generate with: openssl rand -hex 32>
REDIS_PASSWORD=<generate with: openssl rand -hex 32>
docker compose up -d
npm run migrate

Environment

Start the server in production mode:

NODE_ENV=production node --enable-source-maps dist/start.js

Security Checklist

  • TLS termination in front of the server (nginx, Caddy, or cloud load balancer). All client-server communication must be encrypted.
  • Strong passwords for PostgreSQL and Redis (minimum 32 random bytes)
  • Network isolation: PostgreSQL and Redis must not be exposed to the internet
  • Firewall: Only the application port should be reachable from outside
  • Real cryptographic verifiers: Production mode uses napi-rs bindings to curve25519-dalek for constant-time Ristretto255 operations
  • Log rotation and monitoring: Production logs should be shipped to a centralized system
  • Backup strategy for PostgreSQL

Client-Side Integration

In production, the client must generate real Pedersen commitments and Sigma proofs using the @2fapi/client-sdk package with a cryptographic backend (WASM for browsers, napi-rs for Node.js).

npm install @2fapi/client-sdk @2fapi/protocol-spec

The SDK provides domain models, ports, and TypeScript types for building the client-side authentication flow.

API Reference

Core Endpoints

Method  Path                         Description
POST    /v1/clients                  Enroll a new client (register commitment)
POST    /v1/challenges               Request authentication challenge (get nonce)
POST    /v1/verify                   Submit ZK proof, receive access token
GET     /v1/resources/{id}           Access protected resource with Bearer token

Lifecycle Endpoints

Method  Path                         Description
PUT     /v1/clients/{id}/commitment  Rotate commitment (requires auth)
DELETE  /v1/clients/{id}             Revoke client (admin only)
POST    /v1/clients/{id}/recover     Account recovery via BIP-39 phrase
POST    /v1/clients/{id}/reactivate  Reactivate a revoked client (admin only)

Error Format

All errors follow RFC 7807 Problem Details:

{
  "type": "urn:2fapi:error:validation-failed",
  "title": "Bad Request",
  "status": 400,
  "detail": "clientIdentifier is required",
  "instance": "req-abc123"
}
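Clients should branch on the stable `type` URN rather than the human-readable `title`, which may change between releases. A sketch of that dispatch; only the validation-failed URN is documented above, so the other branches are assumptions about reasonable client behavior:

```javascript
// Classify an RFC 7807 problem document into a coarse client action.
// Only urn:2fapi:error:validation-failed is documented; the retry/give-up
// split on status code is an assumed convention.
function classifyProblem(problem) {
  switch (problem.type) {
    case 'urn:2fapi:error:validation-failed':
      return 'fix-request'; // the caller sent a malformed body
    default:
      return problem.status >= 500 ? 'retry' : 'give-up';
  }
}

const problem = {
  type: 'urn:2fapi:error:validation-failed',
  title: 'Bad Request',
  status: 400,
  detail: 'clientIdentifier is required',
  instance: 'req-abc123',
};
```

The `instance` field ("req-abc123" above) is useful to log alongside the action, since it correlates the client-side failure with the server's request logs.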

Further Reading