# Thunderbolt

Enterprise-Grade Distributed Load & Performance Testing Platform

Quick Start • Architecture • Features • API • Deployment • Contributing
Thunderbolt is a cloud-native, distributed load testing platform built with .NET 10, Akka.NET, and event sourcing. It orchestrates thousands of virtual users across a cluster of worker nodes to simulate realistic traffic patterns against HTTP, gRPC, WebSocket, MQTT, AMQP, and raw TCP/UDP endpoints.
## Features

- **Distributed Cluster Engine**: Akka.NET cluster with coordinator/worker topology, automatic shard rebalancing, and split-brain resolution
- **Multi-Protocol Support**: HTTP, gRPC, WebSocket, MQTT, AMQP, and raw TCP/UDP with a pluggable protocol handler architecture
- **JSON Scenario Definitions**: Declarative test scenarios with steps, extractors, assertions, data feeders, and multiple load profiles (ramp-up, constant, steps, spike, custom)
- **Real-Time Metrics**: Live streaming via SignalR with HdrHistogram percentile tracking (P50/P75/P90/P95/P99), RPS, error rates, and throughput
- **Event-Sourced Persistence**: Full test lifecycle stored via a Marten/PostgreSQL event store with projections for read models
- **Time-Series Metrics Storage**: InfluxDB for high-resolution metric data with configurable batching and gzip compression
- **Event Streaming**: Kafka-based event bus for inter-service communication and external integrations
- **AI-Powered Agents**: Microsoft Semantic Kernel agents for scenario generation, metrics analysis, SLO advisory, and test comparison
- **Blazor Dashboard**: Server-side Blazor UI with MudBlazor components for test management, real-time monitoring, and an AI assistant
- **Multi-Tenancy**: Tenant isolation via header-based resolution with per-tenant event streams
- **Plugin System**: Hot-loadable protocol plugins via assembly scanning
- **Kubernetes-Native**: Helm charts, Kubernetes API discovery, and Docker images for all services
- **Observability**: OpenTelemetry tracing and Prometheus metrics export, Serilog structured logging with a Seq sink
## Architecture

```
                    ┌───────────────────┐
                    │     Dashboard     │
                    │ (Blazor/MudBlazor)│
                    └─────────┬─────────┘
                              │ SignalR + HTTP
                    ┌─────────┴─────────┐
                    │    API Server     │
                    │ (ASP.NET Minimal) │
                    └─────────┬─────────┘
                              │ Akka.NET Cluster
              ┌───────────────┼───────────────┐
              │               │               │
       ┌──────┴──────┐  ┌─────┴────┐    ┌─────┴────┐
       │ Coordinator │  │ Worker-1 │    │ Worker-N │
       │ (Singleton) │  │ (Sharded)│    │ (Sharded)│
       └──────┬──────┘  └─────┬────┘    └─────┬────┘
              │               │               │
              │          ┌────┴────┐     ┌────┴────┐
              │          │ Virtual │     │ Virtual │
              │          │  Users  │     │  Users  │
              │          └────┬────┘     └────┬────┘
              │               │               │
       ┌──────┴───────────────┴───────────────┴────┐
       │             Target System(s)              │
       └───────────────────────────────────────────┘

 ┌─────────────┐   ┌─────────────┐   ┌─────────────┐
 │ PostgreSQL  │   │  InfluxDB   │   │    Kafka    │
 │(Event Store)│   │  (Metrics)  │   │ (Streaming) │
 └─────────────┘   └─────────────┘   └─────────────┘
```
| Role | Description |
|---|---|
| Coordinator | Cluster singleton that orchestrates test lifecycle, distributes VUs across workers, handles auto-stop timers, and manages worker failover |
| Worker | Sharded actor region that spawns and manages VirtualUserActor instances executing scenario steps via protocol handlers |
| API | ASP.NET Minimal API node joined to the cluster, exposes REST endpoints, SignalR hub, and Prometheus scraping |
| Dashboard | Blazor Server app consuming the API with real-time SignalR metrics streaming |
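The coordinator's job of spreading virtual users across workers can be sketched in a few lines. This is an illustrative Python sketch, not the actual Akka.NET implementation; the function name `distribute_vus` is hypothetical:

```python
def distribute_vus(total_users: int, workers: list[str]) -> dict[str, int]:
    """Split a target VU count as evenly as possible across workers.

    Illustrative only: the real coordinator does this through Akka.NET
    cluster sharding and rebalances on membership changes.
    """
    if not workers:
        raise ValueError("no workers available")
    base, remainder = divmod(total_users, len(workers))
    # The first `remainder` workers take one extra VU each.
    return {
        worker: base + (1 if i < remainder else 0)
        for i, worker in enumerate(workers)
    }

print(distribute_vus(300, ["worker-1", "worker-2", "worker-3"]))
# {'worker-1': 100, 'worker-2': 100, 'worker-3': 100}
```

When a worker becomes unreachable, the same computation re-runs over the surviving membership, which is what makes the failover described above possible.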
## Project Structure

```
src/
├── Thunderbolt.Core/            # Domain models, aggregates, events, messages, protocols
├── Thunderbolt.Engine/          # Akka.NET actors (Coordinator, Worker, VirtualUser, MetricsAggregator)
├── Thunderbolt.Scenarios/       # Scenario parsing, load profiles, assertions, data feeders, extractors
├── Thunderbolt.Protocols/       # Protocol handler implementations
│   ├── Thunderbolt.Protocols.Abstractions/
│   ├── Thunderbolt.Protocols.Http/
│   ├── Thunderbolt.Protocols.Grpc/
│   ├── Thunderbolt.Protocols.WebSocket/
│   ├── Thunderbolt.Protocols.Mqtt/
│   ├── Thunderbolt.Protocols.Amqp/
│   └── Thunderbolt.Protocols.RawSocket/
├── Thunderbolt.Persistence/     # Marten event store, projections, read models
├── Thunderbolt.Metrics/         # InfluxDB writer, query service, HdrHistogram
├── Thunderbolt.Streaming/       # Kafka producer/consumer, event subscriptions
├── Thunderbolt.Plugins/         # Plugin host, protocol registry, assembly loading
├── Thunderbolt.Agents/          # AI agents (Semantic Kernel): scenario generator, metrics analyst, SLO advisor
├── Thunderbolt.Api/             # REST API, SignalR hub, middleware, authentication
├── Thunderbolt.Coordinator/     # Coordinator node host
├── Thunderbolt.Worker/          # Worker node host
└── Thunderbolt.Dashboard/       # Blazor Server dashboard
tests/                           # xUnit tests with FluentAssertions, NSubstitute, Testcontainers
plugins/                         # Example protocol plugin
scenarios/                       # Sample scenario definitions (JSON)
deploy/
├── docker/                      # Dockerfiles for each service
├── helm/thunderbolt/            # Helm chart for Kubernetes deployment
└── k8s/                         # Raw Kubernetes manifests
```
## Quick Start

### Prerequisites

- .NET 10 SDK (see `global.json`)
- Docker & Docker Compose
Build and publish the applications:

```bash
# Publish all services
dotnet publish src/Thunderbolt.Api -c Release -o out/api
dotnet publish src/Thunderbolt.Coordinator -c Release -o out/coordinator
dotnet publish src/Thunderbolt.Worker -c Release -o out/worker
dotnet publish src/Thunderbolt.Dashboard -c Release -o out/dashboard
```

Start the full stack:

```bash
docker compose up --build -d
```

This starts:
| Service | URL |
|---|---|
| Dashboard | http://localhost:5100 |
| API | http://localhost:5000 |
| InfluxDB | http://localhost:8086 |
| PostgreSQL | localhost:5432 |
| Kafka | localhost:9092 |
The default stack includes 1 coordinator, 2 workers, 1 API, and 1 dashboard node.
Start infrastructure services only:

```bash
docker compose up postgres influxdb kafka -d
```

Then run each service in a separate terminal:

```bash
# Terminal 1 - Coordinator
dotnet run --project src/Thunderbolt.Coordinator

# Terminal 2 - Worker
dotnet run --project src/Thunderbolt.Worker

# Terminal 3 - API
dotnet run --project src/Thunderbolt.Api

# Terminal 4 - Dashboard
dotnet run --project src/Thunderbolt.Dashboard
```

## Configuration

Configuration is managed via `appsettings.json` and environment variables. Key sections:
### Cluster

```json
{
  "Thunderbolt": {
    "Cluster": {
      "Hostname": "0.0.0.0",
      "Port": 8558,
      "Role": "coordinator|worker|api",
      "SeedNodes": ["akka.tcp://thunderbolt@coordinator:8558"],
      "UseKubernetesDiscovery": false,
      "KubernetesLabelSelector": "app=thunderbolt",
      "SplitBrainStrategy": "keep-majority",
      "NumberOfShards": 100,
      "PersistenceConnectionString": "Host=postgres;Database=thunderbolt;..."
    }
  }
}
```

### InfluxDB

```json
{
  "Thunderbolt": {
    "InfluxDb": {
      "Url": "http://localhost:8086",
      "Token": "your-token",
      "Organization": "thunderbolt",
      "Bucket": "metrics",
      "BatchSize": 5000,
      "FlushIntervalMs": 1000,
      "EnableGzip": true
    }
  }
}
```

### Kafka

```json
{
  "Thunderbolt": {
    "Kafka": {
      "BootstrapServers": "localhost:9092",
      "GroupId": "thunderbolt-api",
      "TestEventsTopic": "thunderbolt.test-events",
      "MetricsTopic": "thunderbolt.metrics",
      "CommandsTopic": "thunderbolt.commands"
    }
  }
}
```

### AI

```json
{
  "Thunderbolt": {
    "Ai": {
      "Provider": "AzureOpenAI",
      "ModelId": "gpt-4o",
      "Endpoint": "https://your-endpoint.openai.azure.com",
      "ApiKey": "your-api-key",
      "MaxTokens": 4096,
      "Temperature": 0.3,
      "Agents": {
        "ScenarioGenerator": true,
        "MetricsAnalyst": true,
        "SloAdvisor": true,
        "TestPlanner": true
      }
    }
  }
}
```
> ⚠️ **Security:** Never commit API keys or secrets. Use environment variables or a secret manager in production.
All configuration keys can be set via environment variables using the `__` (double underscore) separator:

```bash
Thunderbolt__Cluster__Role=worker
Thunderbolt__InfluxDb__Token=your-token
Thunderbolt__Kafka__BootstrapServers=kafka:29092
ConnectionStrings__PostgreSQL="Host=postgres;Database=thunderbolt;..."
```

## API

All endpoints are prefixed with `/api/v1` and require authentication (JWT Bearer in production, auto-authenticated in Development mode).
### Tests

| Method | Endpoint | Description |
|---|---|---|
| POST | `/api/v1/tests` | Create and start a new load test |
| GET | `/api/v1/tests` | List all tests (paginated) |
| GET | `/api/v1/tests/{testId}` | Get test details |
| GET | `/api/v1/tests/{testId}/status` | Get live test status with real-time metrics |
| POST | `/api/v1/tests/{testId}/stop` | Gracefully stop a running test |
| DELETE | `/api/v1/tests/{testId}` | Cancel a test |
### Scenarios

| Method | Endpoint | Description |
|---|---|---|
| POST | `/api/v1/scenarios` | Create a new scenario |
| GET | `/api/v1/scenarios` | List all scenarios |
| GET | `/api/v1/scenarios/{id}` | Get scenario details |
| PUT | `/api/v1/scenarios/{id}` | Update a scenario |
| DELETE | `/api/v1/scenarios/{id}` | Delete a scenario |
### Metrics

| Method | Endpoint | Description |
|---|---|---|
| GET | `/api/v1/metrics/{testId}` | Query historical metrics from InfluxDB |
### AI

| Method | Endpoint | Description |
|---|---|---|
| POST | `/api/v1/ai/generate-scenario` | Generate a scenario from natural language |
| POST | `/api/v1/ai/analyze-metrics` | AI analysis of test metrics |
| POST | `/api/v1/ai/slo-advisor` | Get SLO recommendations |
| POST | `/api/v1/ai/compare-tests` | Compare two test runs |
### Streaming & Monitoring

| Protocol | Endpoint | Description |
|---|---|---|
| SignalR | `/hubs/loadtest` | Live metrics streaming (VU count, RPS, latency percentiles, errors) |
| Prometheus | `/metrics` | Prometheus scraping endpoint |
### Multi-Tenancy

Include the tenant header in all API requests:

```
X-Tenant-Id: your-tenant-id
```
## Scenario Definitions

Scenarios are defined in JSON and support complex user journeys with variable extraction, data feeding, and assertions.
```json
{
  "name": "E-Commerce User Journey",
  "description": "Simulates browsing, searching, and adding to cart",
  "protocol": "http",
  "loadProfile": {
    "type": "steps",
    "stages": [
      { "users": 50, "durationSeconds": 60 },
      { "users": 150, "durationSeconds": 120 },
      { "users": 300, "durationSeconds": 180 },
      { "users": 50, "durationSeconds": 60 }
    ]
  },
  "steps": [
    {
      "name": "Homepage",
      "type": "http_request",
      "method": "GET",
      "url": "https://example.com",
      "expectedStatusCodes": [200],
      "headers": { "Accept": "text/html" },
      "thinkTimeMs": 2000,
      "timeoutSeconds": 15,
      "extractors": [
        {
          "type": "regex",
          "name": "csrfToken",
          "pattern": "<meta name=\"csrf-token\" content=\"([^\"]+)\""
        },
        {
          "type": "cookie",
          "name": "sessionId",
          "pattern": "JSESSIONID"
        }
      ]
    },
    {
      "name": "Add to Cart",
      "type": "http_request",
      "method": "POST",
      "url": "https://example.com/api/cart/add",
      "headers": {
        "X-CSRF-Token": "{{csrfToken}}",
        "Cookie": "JSESSIONID={{sessionId}}"
      },
      "body": "{\"productId\":\"{{productId}}\",\"quantity\":1}",
      "contentType": "application/json",
      "thinkTimeMs": 1500,
      "extractors": [
        { "type": "json_path", "name": "cartId", "pattern": "cartId" }
      ]
    }
  ],
  "assertions": [
    { "type": "response_time_percentile", "percentile": 95, "maxMs": 3000 },
    { "type": "error_rate", "maxErrorRate": 0.01 },
    { "type": "throughput", "minRps": 50 }
  ],
  "dataFeeder": {
    "type": "json",
    "strategy": "random",
    "data": [
      { "productId": "SKU-001" },
      { "productId": "SKU-002" },
      { "productId": "SKU-003" }
    ]
  }
}
```

### Load Profiles

| Type | Description |
|---|---|
| `rampUp` | Linear ramp from 0 to N users over a duration |
| `constant` | Fixed number of users for a duration |
| `steps` | Staged increases/decreases in user count |
| `spike` | Sudden burst of users to test resilience |
| `custom` | User-defined load curve |
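A `steps` profile like the one in the example scenario expands into a simple timeline of (start time, user count) segments. The following Python sketch is illustrative only; `profile_timeline` is a hypothetical helper, not part of Thunderbolt:

```python
def profile_timeline(stages: list[dict]) -> tuple[list[tuple[int, int]], int]:
    """Flatten a `steps` load profile into (start_second, users) segments,
    returning the segments and the total test duration in seconds."""
    timeline, t = [], 0
    for stage in stages:
        timeline.append((t, stage["users"]))
        t += stage["durationSeconds"]
    return timeline, t

stages = [
    {"users": 50, "durationSeconds": 60},
    {"users": 150, "durationSeconds": 120},
    {"users": 300, "durationSeconds": 180},
    {"users": 50, "durationSeconds": 60},
]
segments, total = profile_timeline(stages)
print(segments)  # [(0, 50), (60, 150), (180, 300), (360, 50)]
print(total)     # 420
```

So the example runs for 7 minutes total, peaking at 300 concurrent virtual users.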
### Extractors

| Type | Description |
|---|---|
| `regex` | Extract values from the response body via regex capture groups |
| `json_path` | Extract values from a JSON response body |
| `header` | Extract values from response headers |
| `cookie` | Extract values from response cookies |
### Assertions

| Type | Description |
|---|---|
| `response_time_percentile` | P50/P75/P90/P95/P99 latency thresholds |
| `error_rate` | Maximum allowed error rate (0.0 to 1.0) |
| `throughput` | Minimum requests per second |
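Evaluating these assertions against an aggregated run summary can be sketched as follows. This is an illustrative Python sketch; the `summary` field names (`p95_ms`, `error_rate`, `rps`) are assumptions, not Thunderbolt's actual API:

```python
def evaluate_assertions(assertions: list[dict], summary: dict) -> list[str]:
    """Return a list of human-readable failures; an empty list means pass."""
    failures = []
    for a in assertions:
        if a["type"] == "response_time_percentile":
            actual = summary[f"p{a['percentile']}_ms"]
            if actual > a["maxMs"]:
                failures.append(f"p{a['percentile']} {actual}ms > {a['maxMs']}ms")
        elif a["type"] == "error_rate":
            if summary["error_rate"] > a["maxErrorRate"]:
                failures.append(f"error rate {summary['error_rate']} > {a['maxErrorRate']}")
        elif a["type"] == "throughput":
            if summary["rps"] < a["minRps"]:
                failures.append(f"rps {summary['rps']} < {a['minRps']}")
    return failures

assertions = [
    {"type": "response_time_percentile", "percentile": 95, "maxMs": 3000},
    {"type": "error_rate", "maxErrorRate": 0.01},
    {"type": "throughput", "minRps": 50},
]
summary = {"p95_ms": 2840, "error_rate": 0.004, "rps": 112.5}
print(evaluate_assertions(assertions, summary))  # [] -> all assertions pass
```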
## Testing

```bash
# Run all tests
dotnet test

# Run a specific test project
dotnet test tests/Thunderbolt.Core.Tests

# Run with coverage
dotnet test --collect:"XPlat Code Coverage"
```

The test suite includes:
- **Unit tests**: domain models, aggregates, scenarios, protocols, metrics
- **Integration tests**: PostgreSQL and Kafka via Testcontainers
- **Libraries**: xUnit, FluentAssertions, NSubstitute
## Deployment

### Docker

Individual Dockerfiles are provided for each service:

```bash
# Build individual images
docker build -f docker/Dockerfile.api -t thunderbolt-api .
docker build -f docker/Dockerfile.coordinator -t thunderbolt-coordinator .
docker build -f docker/Dockerfile.worker -t thunderbolt-worker .
docker build -f docker/Dockerfile.dashboard -t thunderbolt-dashboard .
```

### Kubernetes (Helm)

```bash
helm install thunderbolt deploy/helm/thunderbolt \
  --namespace thunderbolt \
  --create-namespace \
  --set coordinator.replicas=1 \
  --set worker.replicas=3 \
  --set api.replicas=2
```

The Helm chart includes:
- Coordinator Deployment (singleton pattern)
- Worker StatefulSet (scalable)
- API Deployment with Service
- Dashboard Deployment with Service
- ConfigMap for shared configuration
- RBAC for Kubernetes API discovery
### Scaling

Workers auto-join the cluster via seed node discovery. To scale:

```bash
# Kubernetes
kubectl scale deployment thunderbolt-worker --replicas=5 -n thunderbolt

# Docker Compose
docker compose up --scale worker-1=3 -d
```

The coordinator automatically redistributes virtual users when workers join or leave, including failover when a worker becomes unreachable.
## Technology Stack

| Component | Technology |
|---|---|
| Runtime | .NET 10 / C# (latest) |
| Actor System | Akka.NET 1.5 (Cluster, Sharding, Persistence, DistributedPubSub) |
| Event Store | Marten 7.x / PostgreSQL 16 |
| Time-Series DB | InfluxDB 2.7 |
| Message Broker | Apache Kafka (Confluent) |
| Dashboard | Blazor Server / MudBlazor 8.x |
| Real-Time | ASP.NET SignalR |
| AI | Microsoft Semantic Kernel 1.74 / Azure OpenAI |
| Histograms | HdrHistogram |
| Auth | JWT Bearer / OpenID Connect |
| Telemetry | OpenTelemetry + Prometheus |
| Logging | Serilog (Console + Seq) |
| Serialization | System.Text.Json / YamlDotNet |
| Testing | xUnit, FluentAssertions, NSubstitute, Testcontainers |
| Containerization | Docker, Helm, Kubernetes |
## Metrics

Thunderbolt captures detailed metrics for every request:

| Metric | Description |
|---|---|
| `DurationMs` | Total request duration |
| `TtfbMs` | Time to first byte |
| `ConnectMs` | TCP connection time |
| `TlsMs` | TLS handshake time |
| `BytesSent` | Request payload size |
| `BytesReceived` | Response payload size |
| `StatusCode` | HTTP/protocol status code |
| `IsError` | Error flag (status code + pattern matching) |
Aggregated metrics are computed in real time using HdrHistogram:

- **Percentiles**: P50, P75, P90, P95, P99
- **Throughput**: requests per second
- **Error rate**: errors / total requests
- **Min/Max/Avg**: latency statistics
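A naive version of this aggregation over raw samples looks like the following. It is illustrative only: Thunderbolt uses HdrHistogram, which tracks percentiles in constant memory with bounded relative error instead of sorting every sample:

```python
def aggregate(latencies_ms: list[float], errors: int, duration_s: float) -> dict:
    """Naive nearest-rank aggregation over raw latency samples."""
    samples = sorted(latencies_ms)

    def pct(p: float) -> float:
        # Nearest-rank percentile on the sorted samples.
        idx = max(0, round(p / 100 * len(samples)) - 1)
        return samples[idx]

    return {
        "p50": pct(50), "p95": pct(95), "p99": pct(99),
        "min": samples[0], "max": samples[-1],
        "avg": sum(samples) / len(samples),
        "rps": len(samples) / duration_s,
        "error_rate": errors / len(samples),
    }

stats = aggregate([12, 15, 18, 22, 30, 45, 80, 120, 250, 900],
                  errors=1, duration_s=5)
print(stats["p50"], stats["rps"], stats["error_rate"])  # 30 2.0 0.1
```

Note how the single 900 ms outlier dominates P95/P99 while barely moving the median, which is why percentile thresholds are the assertion type to reach for rather than averages.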
## AI Agents

Thunderbolt includes four AI-powered agents built with Microsoft Semantic Kernel:
| Agent | Description |
|---|---|
| Scenario Generator | Generates complete JSON scenario definitions from natural language descriptions |
| Metrics Analyst | Analyzes test results and provides performance insights |
| SLO Advisor | Recommends Service Level Objectives based on test data |
| Test Comparison | Compares two test runs and highlights regressions |
Configure the agents via the `Thunderbolt:Ai` section in `appsettings.json` or the corresponding environment variables.
## Plugin Development

Create custom protocol handlers by implementing `IProtocolHandler` and packaging them as a .NET class library:

```csharp
public class MyProtocolHandler : IProtocolHandler
{
    public string ProtocolName => "my-protocol";

    public Task<ResponseResult> ExecuteAsync(RequestContext context)
    {
        // Your protocol implementation goes here.
        throw new NotImplementedException();
    }
}
```

Place the compiled DLL in the `plugins/` directory; Thunderbolt auto-discovers and registers it at startup.
## License

This project is licensed under the MIT License.
## Contributing

1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
Built with ⚡ by the Thunderbolt Contributors