whoami
> Leo Camus -- @Dev-next-gen
> Paris, France
> Self-taught. No degree. Full stack from silicon to inference.
I build AI systems end-to-end -- from GPU kernel tuning to autonomous multi-agent pipelines. I run 80B-parameter models locally. I automate offensive security at scale. I don't wait for the right tools to exist.
AI Infrastructure & LLM:
llama.cpp - Ollama - LangChain - Multi-agent orchestration
Local inference -- Qwen3-Coder 80B + 14B - ctx 262K - ROCm
Offensive Security:
Bug bounty automation - Pentest pipelines - CVSS analysis
nuclei - subfinder - katana - httpx - dalfox - jwt_tool
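The toolchain above chains naturally into a recon pipeline. A hypothetical sketch, building each stage's command as an argv list (flag names match the current ProjectDiscovery CLIs, but check your installed versions; actually wiring the stages through subprocess and installing the binaries is left out):

```python
import shlex

def recon_commands(domain: str, out_dir: str = "out") -> dict[str, list[str]]:
    # subfinder feeds httpx; live hosts then feed katana and nuclei.
    return {
        "subfinder": ["subfinder", "-d", domain, "-silent", "-o", f"{out_dir}/subs.txt"],
        "httpx": ["httpx", "-l", f"{out_dir}/subs.txt", "-o", f"{out_dir}/live.txt"],
        "katana": ["katana", "-list", f"{out_dir}/live.txt", "-o", f"{out_dir}/urls.txt"],
        "nuclei": ["nuclei", "-l", f"{out_dir}/live.txt", "-o", f"{out_dir}/findings.txt"],
    }

if __name__ == "__main__":
    for name, argv in recon_commands("example.com").items():
        print(name, "->", shlex.join(argv))
```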
GPU & Compute:
AMD / NVIDIA multi-GPU - ROCm - OpenCL - Vulkan
Kernel-level tuning - Inference optimization
Systems & Backend:
Linux (Ubuntu / Kali) - Python - Node.js - Rust - REST APIs
MySQL - PostgreSQL - Supabase - self-hosted infra
OSINT & Geopolitics:
Open-source intelligence pipelines - GDELT - ACLED
Real-time geospatial analysis - Sanctions mapping
[ WIP ] Geopolitical OSINT Platform -- open-source Palantir alternative
=> Replaces classified sources with 6 tiers of legally usable open data
=> GDELT - ACLED - SIPRI - Sentinel Hub - Copernicus - NASA FIRMS - ADS-B Exchange - OpenSanctions
=> LLM pipeline: multi-source ingestion -> entity graph -> geostrategy briefs
-> 30+ integrated sources - real-time - fully local
-> github.com/Dev-next-gen/osint-platform
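The "multi-source ingestion -> entity graph" step can be sketched minimally: events from different feeds collapse into a weighted co-occurrence graph over actors. The records below are simplified stand-ins, not the real GDELT/ACLED schemas:

```python
from collections import defaultdict

def build_entity_graph(events):
    # Undirected co-occurrence graph: edge weight = number of shared events.
    graph = defaultdict(lambda: defaultdict(int))
    for ev in events:
        a, b = ev["actor1"], ev["actor2"]
        graph[a][b] += 1
        graph[b][a] += 1
    return graph

events = [
    {"source": "gdelt", "actor1": "FRA", "actor2": "DEU"},
    {"source": "acled", "actor1": "FRA", "actor2": "DEU"},
    {"source": "gdelt", "actor1": "FRA", "actor2": "USA"},
]
g = build_entity_graph(events)
print(g["FRA"]["DEU"])  # 2
```

A real ingestion layer would normalize actor codes per source before merging; the graph structure itself stays the same.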
[ PROD ] OpenClaw -- Autonomous bug bounty pipeline
=> 4 specialized LLM agents: recon - scan - analysis - reporting
=> Qwen3-Coder 80B (recon/scan) + Qwen3-14B (orchestration/analysis)
=> Full pipeline: subfinder -> httpx -> katana -> nuclei -> CVSS -> HackerOne report
=> Tested on real targets - bounties up to $10,000 for Critical findings
-> 100% local - multi-GPU - autonomous
-> github.com/Dev-next-gen/distributed-agent-runtime
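The CVSS step in the pipeline above is fully deterministic. A sketch of the CVSS v3.1 base-score formula, scope-unchanged case only (the full spec also covers scope-changed and temporal/environmental metrics):

```python
import math

def roundup(x: float) -> float:
    # Spec-defined rounding: always up, to one decimal place.
    i = round(x * 100000)
    return i / 100000 if i % 10000 == 0 else (math.floor(i / 10000) + 1) / 10

def cvss31_base(av, ac, pr, ui, c, i, a):
    # Inputs are the metric weights from the CVSS v3.1 spec (scope unchanged).
    iss = 1 - (1 - c) * (1 - i) * (1 - a)
    impact = 6.42 * iss
    exploitability = 8.22 * av * ac * pr * ui
    return 0.0 if impact <= 0 else roundup(min(impact + exploitability, 10))

# AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H -> 9.8 (Critical)
score = cvss31_base(av=0.85, ac=0.77, pr=0.85, ui=0.85, c=0.56, i=0.56, a=0.56)
print(score)  # 9.8
```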
[ LIVE ] AI Orchestrator
=> LLM orchestration -- 80B strategy + 14B execution + quality-control loops
=> Strategy pattern - score threshold - auto retry
-> github.com/Dev-next-gen/ai-orchestrator
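The score-threshold / auto-retry loop can be sketched in a few lines. `generate` and `score` here are stubs standing in for the real 80B/14B model calls:

```python
def run_with_quality_gate(generate, score, threshold=0.8, max_retries=3):
    # Retry until a candidate clears the threshold; keep the best seen.
    best, best_score = None, -1.0
    for attempt in range(max_retries):
        candidate = generate(attempt)
        s = score(candidate)
        if s > best_score:
            best, best_score = candidate, s
        if s >= threshold:
            break  # good enough, stop retrying
    return best, best_score

# Stub usage: quality improves on each retry.
drafts = ["rough", "better", "polished"]
out, s = run_with_quality_gate(
    generate=lambda n: drafts[n],
    score=lambda d: {"rough": 0.4, "better": 0.7, "polished": 0.9}[d],
)
print(out, s)  # polished 0.9
```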
[ LIVE ] Multi-Agent Framework
=> Lightweight multi-agent framework -- ACP protocol, task graph, role-based agents
=> Tool registry - parallel execution - local LLM backends
-> github.com/Dev-next-gen/multi-agent-framework
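A role-based task graph reduces to a topological sort over dependencies, assuming the graph is a DAG. The task and role names below are illustrative; a real framework would dispatch each task to an agent registered for its role:

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

tasks = {
    "recon": {"deps": [], "role": "scanner"},
    "scan": {"deps": ["recon"], "role": "scanner"},
    "analysis": {"deps": ["scan"], "role": "analyst"},
    "report": {"deps": ["analysis"], "role": "writer"},
}

# Execution order that respects every dependency edge.
order = list(TopologicalSorter({t: set(v["deps"]) for t, v in tasks.items()}).static_order())
print(order)  # ['recon', 'scan', 'analysis', 'report']
```

`TopologicalSorter` also supports incremental `get_ready()`/`done()` calls, which is what makes parallel execution of independent branches possible.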
[ LIVE ] GPU Cluster Lab
=> AMD/NVIDIA GPU cluster infrastructure -- ~300-GPU deployment experience
=> ROCm kernel tuning - flash attention - multi-node benchmarking
-> github.com/Dev-next-gen/gpu-cluster-lab
[ LIVE ] Local LLM Stack
=> Production-grade local LLM deployment -- llama.cpp, Ollama, GGUF, AMD ROCm
=> 14B to 80B - zero cloud dependency - benchmarked configs
-> github.com/Dev-next-gen/local-llm-stack
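A hypothetical launcher sketch for a local llama.cpp server. Flag names (`-m`, `-c`, `-ngl`) match common llama.cpp builds but should be checked against your version; the model path and context size are placeholders:

```python
import shlex

def llama_server_cmd(model_path: str, ctx: int = 262144, gpu_layers: int = 99, port: int = 8080):
    return [
        "llama-server",
        "-m", model_path,         # GGUF model file
        "-c", str(ctx),           # context window size
        "-ngl", str(gpu_layers),  # layers offloaded to GPU (ROCm/HIP build)
        "--port", str(port),
    ]

print(shlex.join(llama_server_cmd("qwen3-coder-80b.gguf")))
```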
contact
|-- email -> leo.camus23@gmail.com
|-- linkedin -> linkedin.com/in/leo-camus-4bb480304
+-- site -> nextgen-labs.net
Open to freelance missions, research collabs, or projects that shouldn't exist yet.
"I don't wait for technologies. I build them faster."
