
⚡ Thunderbolt

Enterprise-Grade Distributed Load & Performance Testing Platform

Quick Start • Architecture • Features • API • Deployment • Contributing


Thunderbolt is a cloud-native, distributed load testing platform built with .NET 10, Akka.NET, and event sourcing. It orchestrates thousands of virtual users across a cluster of worker nodes to simulate realistic traffic patterns against HTTP, gRPC, WebSocket, MQTT, AMQP, and raw TCP/UDP endpoints.

✨ Features

  • Distributed Cluster Engine — Akka.NET cluster with coordinator/worker topology, automatic shard rebalancing, and split-brain resolution
  • Multi-Protocol Support — HTTP, gRPC, WebSocket, MQTT, AMQP, Raw TCP/UDP with a pluggable protocol handler architecture
  • JSON Scenario Definitions — Declarative test scenarios with steps, extractors, assertions, data feeders, and multiple load profiles (ramp-up, constant, steps, spike, custom)
  • Real-Time Metrics — Live streaming via SignalR with HdrHistogram percentile tracking (P50/P75/P90/P95/P99), RPS, error rates, and throughput
  • Event-Sourced Persistence — Full test lifecycle stored via Marten/PostgreSQL event store with projections for read models
  • Time-Series Metrics Storage — InfluxDB for high-resolution metric data with configurable batching and gzip compression
  • Event Streaming — Kafka-based event bus for inter-service communication and external integrations
  • AI-Powered Agents — Microsoft Semantic Kernel agents for scenario generation, metrics analysis, SLO advisory, and test comparison
  • Blazor Dashboard — Server-side Blazor UI with MudBlazor components for test management, real-time monitoring, and AI assistant
  • Multi-Tenancy — Tenant isolation via header-based resolution with per-tenant event streams
  • Plugin System — Hot-loadable protocol plugins via assembly scanning
  • Kubernetes-Native — Helm charts, Kubernetes API discovery, and Docker images for all services
  • Observability — OpenTelemetry tracing + Prometheus metrics export, Serilog structured logging with Seq sink

πŸ— Architecture

                           ┌───────────────────┐
                           │     Dashboard     │
                           │ (Blazor/MudBlazor)│
                           └─────────┬─────────┘
                                     │ SignalR + HTTP
                           ┌─────────▼─────────┐
                           │    API Server     │
                           │ (ASP.NET Minimal) │
                           └─────────┬─────────┘
                                     │ Akka.NET Cluster
                      ┌──────────────┼──────────────┐
                      │              │              │
               ┌──────▼──────┐  ┌───▼──────┐  ┌───▼──────┐
               │ Coordinator │  │ Worker-1 │  │ Worker-N │
               │ (Singleton) │  │ (Sharded)│  │ (Sharded)│
               └──────┬──────┘  └───┬──────┘  └───┬──────┘
                      │             │             │
                      │        ┌────▼────┐   ┌────▼────┐
                      │        │ Virtual │   │ Virtual │
                      │        │  Users  │   │  Users  │
                      │        └────┬────┘   └────┬────┘
                      │             │             │
               ┌──────▼─────────────▼─────────────▼──────┐
               │             Target System(s)            │
               └─────────────────────────────────────────┘

    ┌─────────────┐    ┌────────────┐    ┌─────────────┐
    │ PostgreSQL  │    │  InfluxDB  │    │    Kafka    │
    │(Event Store)│    │ (Metrics)  │    │ (Streaming) │
    └─────────────┘    └────────────┘    └─────────────┘

Node Roles

| Role | Description |
|------|-------------|
| Coordinator | Cluster singleton that orchestrates the test lifecycle, distributes VUs across workers, handles auto-stop timers, and manages worker failover |
| Worker | Sharded actor region that spawns and manages VirtualUserActor instances executing scenario steps via protocol handlers |
| API | ASP.NET Minimal API node joined to the cluster; exposes REST endpoints, the SignalR hub, and Prometheus scraping |
| Dashboard | Blazor Server app consuming the API with real-time SignalR metrics streaming |

📦 Project Structure

src/
├── Thunderbolt.Core/              # Domain models, aggregates, events, messages, protocols
├── Thunderbolt.Engine/            # Akka.NET actors (Coordinator, Worker, VirtualUser, MetricsAggregator)
├── Thunderbolt.Scenarios/         # Scenario parsing, load profiles, assertions, data feeders, extractors
├── Thunderbolt.Protocols/         # Protocol handler implementations
│   ├── Thunderbolt.Protocols.Abstractions/
│   ├── Thunderbolt.Protocols.Http/
│   ├── Thunderbolt.Protocols.Grpc/
│   ├── Thunderbolt.Protocols.WebSocket/
│   ├── Thunderbolt.Protocols.Mqtt/
│   ├── Thunderbolt.Protocols.Amqp/
│   └── Thunderbolt.Protocols.RawSocket/
├── Thunderbolt.Persistence/       # Marten event store, projections, read models
├── Thunderbolt.Metrics/           # InfluxDB writer, query service, HdrHistogram
├── Thunderbolt.Streaming/         # Kafka producer/consumer, event subscriptions
├── Thunderbolt.Plugins/           # Plugin host, protocol registry, assembly loading
├── Thunderbolt.Agents/            # AI agents (Semantic Kernel) — scenario generator, metrics analyst, SLO advisor
├── Thunderbolt.Api/               # REST API, SignalR hub, middleware, authentication
├── Thunderbolt.Coordinator/       # Coordinator node host
├── Thunderbolt.Worker/            # Worker node host
└── Thunderbolt.Dashboard/         # Blazor Server dashboard

tests/                             # xUnit tests with FluentAssertions, NSubstitute, Testcontainers
plugins/                           # Example protocol plugin
scenarios/                         # Sample scenario definitions (JSON)
deploy/
├── docker/                        # Dockerfiles for each service
├── helm/thunderbolt/              # Helm chart for Kubernetes deployment
└── k8s/                           # Raw Kubernetes manifests

🚀 Quick Start

Prerequisites

  • .NET 10 SDK
  • Docker and Docker Compose

Option 1: Docker Compose (Recommended)

Build and publish the applications:

# Publish all services
dotnet publish src/Thunderbolt.Api -c Release -o out/api
dotnet publish src/Thunderbolt.Coordinator -c Release -o out/coordinator
dotnet publish src/Thunderbolt.Worker -c Release -o out/worker
dotnet publish src/Thunderbolt.Dashboard -c Release -o out/dashboard

Start the full stack:

docker compose up --build -d

This starts:

| Service | URL |
|---------|-----|
| Dashboard | http://localhost:5100 |
| API | http://localhost:5000 |
| InfluxDB | http://localhost:8086 |
| PostgreSQL | localhost:5432 |
| Kafka | localhost:9092 |

The default stack includes 1 coordinator, 2 workers, 1 API, and 1 dashboard node.

Option 2: Local Development

Start infrastructure services only:

docker compose up postgres influxdb kafka -d

Then run each service in separate terminals:

# Terminal 1 — Coordinator
dotnet run --project src/Thunderbolt.Coordinator

# Terminal 2 — Worker
dotnet run --project src/Thunderbolt.Worker

# Terminal 3 — API
dotnet run --project src/Thunderbolt.Api

# Terminal 4 — Dashboard
dotnet run --project src/Thunderbolt.Dashboard

📋 Configuration

Configuration is managed via appsettings.json and environment variables. Key sections:

Cluster Configuration

{
  "Thunderbolt": {
    "Cluster": {
      "Hostname": "0.0.0.0",
      "Port": 8558,
      "Role": "coordinator|worker|api",
      "SeedNodes": ["akka.tcp://thunderbolt@coordinator:8558"],
      "UseKubernetesDiscovery": false,
      "KubernetesLabelSelector": "app=thunderbolt",
      "SplitBrainStrategy": "keep-majority",
      "NumberOfShards": 100,
      "PersistenceConnectionString": "Host=postgres;Database=thunderbolt;..."
    }
  }
}

Metrics (InfluxDB)

{
  "Thunderbolt": {
    "InfluxDb": {
      "Url": "http://localhost:8086",
      "Token": "your-token",
      "Organization": "thunderbolt",
      "Bucket": "metrics",
      "BatchSize": 5000,
      "FlushIntervalMs": 1000,
      "EnableGzip": true
    }
  }
}

Streaming (Kafka)

{
  "Thunderbolt": {
    "Kafka": {
      "BootstrapServers": "localhost:9092",
      "GroupId": "thunderbolt-api",
      "TestEventsTopic": "thunderbolt.test-events",
      "MetricsTopic": "thunderbolt.metrics",
      "CommandsTopic": "thunderbolt.commands"
    }
  }
}

AI Agents (Semantic Kernel)

{
  "Thunderbolt": {
    "Ai": {
      "Provider": "AzureOpenAI",
      "ModelId": "gpt-4o",
      "Endpoint": "https://your-endpoint.openai.azure.com",
      "ApiKey": "your-api-key",
      "MaxTokens": 4096,
      "Temperature": 0.3,
      "Agents": {
        "ScenarioGenerator": true,
        "MetricsAnalyst": true,
        "SloAdvisor": true,
        "TestPlanner": true
      }
    }
  }
}

⚠️ Security: Never commit API keys or secrets. Use environment variables or a secret manager in production.

Environment Variable Overrides

All configuration keys can be set via environment variables using the __ (double underscore) separator:

Thunderbolt__Cluster__Role=worker
Thunderbolt__InfluxDb__Token=your-token
Thunderbolt__Kafka__BootstrapServers=kafka:29092
ConnectionStrings__PostgreSQL="Host=postgres;Database=thunderbolt;..."
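In Docker Compose, the same overrides are typically supplied per service through an environment block (the service name below is illustrative):

```yaml
services:
  worker-1:
    environment:
      Thunderbolt__Cluster__Role: worker
      Thunderbolt__InfluxDb__Token: your-token
      Thunderbolt__Kafka__BootstrapServers: kafka:29092
```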

📖 API Reference

All endpoints are prefixed with /api/v1 and require authentication (JWT Bearer in production, auto-authenticated in Development mode).

Load Tests

| Method | Endpoint | Description |
|--------|----------|-------------|
| POST | /api/v1/tests | Create and start a new load test |
| GET | /api/v1/tests | List all tests (paginated) |
| GET | /api/v1/tests/{testId} | Get test details |
| GET | /api/v1/tests/{testId}/status | Get live test status with real-time metrics |
| POST | /api/v1/tests/{testId}/stop | Gracefully stop a running test |
| DELETE | /api/v1/tests/{testId} | Cancel a test |

Scenarios

| Method | Endpoint | Description |
|--------|----------|-------------|
| POST | /api/v1/scenarios | Create a new scenario |
| GET | /api/v1/scenarios | List all scenarios |
| GET | /api/v1/scenarios/{id} | Get scenario details |
| PUT | /api/v1/scenarios/{id} | Update a scenario |
| DELETE | /api/v1/scenarios/{id} | Delete a scenario |

Metrics

| Method | Endpoint | Description |
|--------|----------|-------------|
| GET | /api/v1/metrics/{testId} | Query historical metrics from InfluxDB |

AI Agents

| Method | Endpoint | Description |
|--------|----------|-------------|
| POST | /api/v1/ai/generate-scenario | Generate a scenario from natural language |
| POST | /api/v1/ai/analyze-metrics | AI analysis of test metrics |
| POST | /api/v1/ai/slo-advisor | Get SLO recommendations |
| POST | /api/v1/ai/compare-tests | Compare two test runs |

Real-Time

| Protocol | Endpoint | Description |
|----------|----------|-------------|
| SignalR | /hubs/loadtest | Live metrics streaming (VU count, RPS, latency percentiles, errors) |
| Prometheus | /metrics | Prometheus scraping endpoint |

Multi-Tenancy

Include the tenant header in all API requests:

X-Tenant-Id: your-tenant-id
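Putting the pieces together, a request that starts a test for a given tenant might look like this (the request body shape is illustrative; see the POST /api/v1/tests contract for the exact schema):

```shell
# Start a load test as a specific tenant. Development mode auto-authenticates,
# so no bearer token is needed locally; the body shape is illustrative.
curl -X POST http://localhost:5000/api/v1/tests \
  -H 'Content-Type: application/json' \
  -H 'X-Tenant-Id: your-tenant-id' \
  -d '{"scenarioName": "E-Commerce User Journey"}'
```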

📝 Scenario Definition

Scenarios are defined in JSON and support complex user journeys with variable extraction, data feeding, and assertions.

Example: E-Commerce Load Test

{
  "name": "E-Commerce User Journey",
  "description": "Simulates browsing, searching, and adding to cart",
  "protocol": "http",
  "loadProfile": {
    "type": "steps",
    "stages": [
      { "users": 50,  "durationSeconds": 60  },
      { "users": 150, "durationSeconds": 120 },
      { "users": 300, "durationSeconds": 180 },
      { "users": 50,  "durationSeconds": 60  }
    ]
  },
  "steps": [
    {
      "name": "Homepage",
      "type": "http_request",
      "method": "GET",
      "url": "https://example.com",
      "expectedStatusCodes": [200],
      "headers": { "Accept": "text/html" },
      "thinkTimeMs": 2000,
      "timeoutSeconds": 15,
      "extractors": [
        {
          "type": "regex",
          "name": "csrfToken",
          "pattern": "<meta name=\"csrf-token\" content=\"([^\"]+)\""
        },
        {
          "type": "cookie",
          "name": "sessionId",
          "pattern": "JSESSIONID"
        }
      ]
    },
    {
      "name": "Add to Cart",
      "type": "http_request",
      "method": "POST",
      "url": "https://example.com/api/cart/add",
      "headers": {
        "X-CSRF-Token": "{{csrfToken}}",
        "Cookie": "JSESSIONID={{sessionId}}"
      },
      "body": "{\"productId\":\"{{productId}}\",\"quantity\":1}",
      "contentType": "application/json",
      "thinkTimeMs": 1500,
      "extractors": [
        { "type": "json_path", "name": "cartId", "pattern": "cartId" }
      ]
    }
  ],
  "assertions": [
    { "type": "response_time_percentile", "percentile": 95, "maxMs": 3000 },
    { "type": "error_rate", "maxErrorRate": 0.01 },
    { "type": "throughput", "minRps": 50 }
  ],
  "dataFeeder": {
    "type": "json",
    "strategy": "random",
    "data": [
      { "productId": "SKU-001" },
      { "productId": "SKU-002" },
      { "productId": "SKU-003" }
    ]
  }
}

Load Profile Types

| Type | Description |
|------|-------------|
| rampUp | Linear ramp from 0 to N users over a duration |
| constant | Fixed number of users for a duration |
| steps | Staged increases/decreases in user count |
| spike | Sudden burst of users to test resilience |
| custom | User-defined load curve |
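For example, a plausible rampUp profile looks like this (field names are assumed to follow the same conventions as the steps example above; check the scenario schema shipped with your version):

```json
{
  "loadProfile": {
    "type": "rampUp",
    "users": 200,
    "durationSeconds": 120
  }
}
```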

Extractor Types

| Type | Description |
|------|-------------|
| regex | Extract values from response body via regex capture groups |
| json_path | Extract values from JSON response body |
| header | Extract values from response headers |
| cookie | Extract values from response cookies |

Assertion Types

| Type | Description |
|------|-------------|
| response_time_percentile | P50/P75/P90/P95/P99 latency thresholds |
| error_rate | Maximum allowed error rate (0.0 – 1.0) |
| throughput | Minimum requests per second |

🧪 Testing

# Run all tests
dotnet test

# Run specific test project
dotnet test tests/Thunderbolt.Core.Tests

# Run with coverage
dotnet test --collect:"XPlat Code Coverage"

The test suite includes:

  • Unit tests — Domain models, aggregates, scenarios, protocols, metrics
  • Integration tests — PostgreSQL and Kafka via Testcontainers
  • Libraries — xUnit, FluentAssertions, NSubstitute

🐳 Docker

Individual Dockerfiles are provided for each service:

# Build individual images
docker build -f docker/Dockerfile.api -t thunderbolt-api .
docker build -f docker/Dockerfile.coordinator -t thunderbolt-coordinator .
docker build -f docker/Dockerfile.worker -t thunderbolt-worker .
docker build -f docker/Dockerfile.dashboard -t thunderbolt-dashboard .

☸️ Deployment

Kubernetes (Helm)

helm install thunderbolt deploy/helm/thunderbolt \
  --namespace thunderbolt \
  --create-namespace \
  --set coordinator.replicas=1 \
  --set worker.replicas=3 \
  --set api.replicas=2

The Helm chart includes:

  • Coordinator Deployment (singleton pattern)
  • Worker StatefulSet (scalable)
  • API Deployment with Service
  • Dashboard Deployment with Service
  • ConfigMap for shared configuration
  • RBAC for Kubernetes API discovery

Scaling Workers

Workers auto-join the cluster via seed node discovery. To scale:

# Kubernetes
kubectl scale deployment thunderbolt-worker --replicas=5 -n thunderbolt

# Docker Compose
docker compose up --scale worker-1=3 -d

The coordinator automatically redistributes virtual users when workers join or leave, including failover when a worker becomes unreachable.

🔧 Technology Stack

| Component | Technology |
|-----------|------------|
| Runtime | .NET 10 / C# (latest) |
| Actor System | Akka.NET 1.5 (Cluster, Sharding, Persistence, DistributedPubSub) |
| Event Store | Marten 7.x / PostgreSQL 16 |
| Time-Series DB | InfluxDB 2.7 |
| Message Broker | Apache Kafka (Confluent) |
| Dashboard | Blazor Server / MudBlazor 8.x |
| Real-Time | ASP.NET SignalR |
| AI | Microsoft Semantic Kernel 1.74 / Azure OpenAI |
| Histograms | HdrHistogram |
| Auth | JWT Bearer / OpenID Connect |
| Telemetry | OpenTelemetry + Prometheus |
| Logging | Serilog (Console + Seq) |
| Serialization | System.Text.Json / YamlDotNet |
| Testing | xUnit, FluentAssertions, NSubstitute, Testcontainers |
| Containerization | Docker, Helm, Kubernetes |

📊 Metrics & Observability

Thunderbolt captures detailed metrics for every request:

| Metric | Description |
|--------|-------------|
| DurationMs | Total request duration |
| TtfbMs | Time to first byte |
| ConnectMs | TCP connection time |
| TlsMs | TLS handshake time |
| BytesSent | Request payload size |
| BytesReceived | Response payload size |
| StatusCode | HTTP/protocol status code |
| IsError | Error flag (status code + pattern matching) |

Aggregated metrics are computed in real-time using HdrHistogram:

  • Percentiles: P50, P75, P90, P95, P99
  • Throughput: Requests/second
  • Error Rate: Errors / Total requests
  • Min/Max/Avg: Latency statistics

🤖 AI Agents

Thunderbolt includes four AI-powered agents built with Microsoft Semantic Kernel:

| Agent | Description |
|-------|-------------|
| Scenario Generator | Generates complete JSON scenario definitions from natural language descriptions |
| Metrics Analyst | Analyzes test results and provides performance insights |
| SLO Advisor | Recommends Service Level Objectives based on test data |
| Test Comparison | Compares two test runs and highlights regressions |

Configure AI by setting the Thunderbolt:Ai section in appsettings.json or via environment variables.
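For example, the API key can be kept out of source control by overriding it through environment variables, using the same __ separator convention described above:

```shell
# Override the Thunderbolt:Ai settings from the environment (values are placeholders)
Thunderbolt__Ai__Provider=AzureOpenAI
Thunderbolt__Ai__ApiKey=your-api-key
Thunderbolt__Ai__Endpoint=https://your-endpoint.openai.azure.com
```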

🔌 Plugin System

Create custom protocol handlers by implementing IProtocolHandler and packaging as a .NET class library:

public class MyProtocolHandler : IProtocolHandler
{
    public string ProtocolName => "my-protocol";

    public Task<ResponseResult> ExecuteAsync(RequestContext context)
    {
        // Execute the request described by the context against your protocol,
        // then return a ResponseResult carrying timing and status information.
        throw new NotImplementedException();
    }
}

Place the compiled DLL in the plugins/ directory — Thunderbolt auto-discovers and registers it at startup.

📄 License

This project is licensed under the MIT License.

🤝 Contributing

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

Built with ⚡ by the Thunderbolt Contributors
