
token-optimization

Here are 208 public repositories matching this topic...

14-stage Fusion Pipeline for LLM token compression — reversible compression, AST-aware code analysis, intelligent content routing. Zero LLM inference cost. MIT licensed.

  • Updated Mar 18, 2026
  • Python
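Reversible compression of the kind the description above mentions can be illustrated with a minimal dictionary-substitution sketch. This is a hypothetical example, not the project's actual pipeline: frequent phrases are swapped for short sentinel placeholders, and the stored mapping lets the original text be restored exactly, so no information is lost before the prompt reaches the model.

```python
# Hypothetical illustration of reversible prompt compression:
# replace frequent phrases with short placeholders and keep the
# mapping so the original text can be restored byte-for-byte.
# The phrase table and sentinel characters are invented for this sketch.

SUBSTITUTIONS = {
    "for example": "\u00a7e",    # rare sentinel chars as placeholders
    "in other words": "\u00a7w",
}

def compress(text: str) -> str:
    for phrase, token in SUBSTITUTIONS.items():
        text = text.replace(phrase, token)
    return text

def decompress(text: str) -> str:
    for phrase, token in SUBSTITUTIONS.items():
        text = text.replace(token, phrase)
    return text

original = "This is, for example, a prompt; in other words, input text."
packed = compress(original)
assert decompress(packed) == original   # round-trip is lossless
assert len(packed) < len(original)      # and the packed form is shorter
```

A real system would build the phrase table from corpus statistics and measure savings in model tokens rather than characters, but the round-trip guarantee is the defining property.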
RustAPI

The central goal is the 5-line API: a complete, production-ready endpoint with auto-generated OpenAPI documentation, compiled validation, and distributed tracing should require no more boilerplate than the handler logic itself.

  • Updated Mar 18, 2026
  • Rust
prompt-refiner

🚀 Lightweight Python library for building production LLM applications with smart context management and automatic token optimization. Save 10-20% on API costs while fitting RAG docs, chat history, and prompts into your token budget.

  • Updated Dec 23, 2025
  • Python
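The "fitting RAG docs, chat history, and prompts into your token budget" idea above can be sketched as a greedy packing loop. This is a hypothetical illustration, not prompt-refiner's actual API; token counts are approximated by whitespace-split words, whereas a real library would use the target model's tokenizer.

```python
# Hypothetical sketch of fitting content into a token budget
# (not prompt-refiner's actual API). The prompt is mandatory;
# recent chat turns and RAG docs are added greedily while they fit.

def fit_to_budget(prompt: str, docs: list[str], history: list[str],
                  budget: int) -> str:
    def count(text: str) -> int:
        return len(text.split())   # crude stand-in for a real tokenizer

    parts = [prompt]
    remaining = budget - count(prompt)
    # Prefer the newest history turns, then RAG docs, while budget allows.
    for chunk in list(reversed(history)) + docs:
        if count(chunk) <= remaining:
            parts.append(chunk)
            remaining -= count(chunk)
    return "\n".join(parts)

context = fit_to_budget(
    "Summarize the issue.",
    docs=["doc one with several words here", "short doc"],
    history=["older turn", "most recent turn"],
    budget=12,
)
assert len(context.split()) <= 12        # never exceeds the budget
assert "most recent turn" in context     # recency is prioritized
```

Production libraries refine this with per-section priorities, chunk truncation instead of all-or-nothing inclusion, and exact tokenizer counts, but the budget-constrained selection loop is the core mechanic.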
