Merged

Dev #25
69 changes: 68 additions & 1 deletion README.md
Original file line number Diff line number Diff line change
@@ -1,3 +1,70 @@
# AI Token Crusher – Full Journey (From Idea to Production)

## Quick Links
- Repository: https://github.com/totalbrain/TokenOptimizer
- Live Releases: https://github.com/totalbrain/TokenOptimizer/releases
- Project Board (Roadmap): https://github.com/users/totalbrain/projects/1
- Product Hunt Launch (coming soon): https://www.producthunt.com/posts/ai-token-crusher
- Workflow: https://github.com/totalbrain/TokenOptimizer/blob/dev/docs/Workflow.md

## The Story – How It All Started
One day I was tired of:
- Wasting thousands of tokens daily on long Python scripts and RAG documents
- Copy-pasting code into ChatGPT/Claude just to remove comments and spaces
- Getting rate-limited because context was too big

I thought: "There must be a better way."

So I built AI Token Crusher – an **offline desktop app** that safely cuts up to 75% of tokens while keeping 100% readability for all major LLMs (Grok, GPT-4o, Claude 3.5, Llama 3.1, Gemini).

## What We Have Achieved So Far (Live & Working)

| Feature | Status | Notes |
|----------------------------------------|-----------|-------|
| 20+ AI-safe optimization techniques | Done | Comments, docstrings, spaces, unicode shortcuts, etc. |
| Full dark UI (GitHub-style) | Done | Modern, clean, professional |
| Dark / Light theme toggle | Done | Thanks to @Syogo-Suganoya |
| Real-time character & savings counter | Done | Live feedback |
| Load file / paste text / save output | Done | Full workflow |
| 18 planned features in public roadmap | Done | Transparent project board |
| Protected `main` branch | Done | Only stable code |
| Active `dev` branch for contributions | Done | All PRs go here |
| First community PR merged | Done | #19 – Theme toggle |
| GitHub Actions ready (tests coming) | Done | CI/CD foundation |
| First release v1.0.1 published | Done | With .exe and source |

## Current Repository Status (Perfect for Contributors)
- Default branch: `main` (always stable, protected)
- Development branch: `dev` (all PRs go here)
- All contributors: create branch from `dev` → PR to `dev`
- Releases: only from `dev` → `main` via PR

## What's Coming Next (Top Priority)
1. Dual mode: `--terminal` + `--gui` support (CLI automation)
2. Real token counter (tiktoken + multi-model)
3. Preset profiles (Safe / Aggressive / Nuclear)
4. VS Code extension
5. Portable .exe (single file)
6. GitHub Actions with automatic tests
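Roadmap item 2 could look roughly like this — a hedged sketch, not the shipped implementation: it uses the `tiktoken` package when available and otherwise falls back to a crude ~4-characters-per-token estimate (the fallback ratio is an assumption, not a measurement):

```python
# Sketch for roadmap item 2 (real token counter).
def count_tokens(text: str, model: str = "gpt-4o") -> int:
    try:
        import tiktoken
    except ImportError:
        # Crude heuristic when tiktoken is not installed
        return max(1, len(text) // 4)
    try:
        enc = tiktoken.encoding_for_model(model)
    except KeyError:
        # Unknown model name: fall back to a common base encoding
        enc = tiktoken.get_encoding("cl100k_base")
    return len(enc.encode(text))
```

Showing real token counts (rather than character counts) would make the savings percentages directly comparable across models.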

## Special Thanks
- @Syogo-Suganoya – First contributor, added beautiful dark/light theme toggle
- You – Every star, issue, and suggestion helps!

## Want to Help?
1. Star the repo (it means the world!)
2. Try the app → report bugs → suggest features
3. Pick any "good first issue" from the roadmap
4. Spread the word – we’re going to Product Hunt soon!

Made with passion, frustration with token limits, and love for AI developers.

— totalbrain (creator)
November 2025

AI Token Crusher – Because nobody should pay for whitespace.


# AI Token Crusher

**Cut up to 75% of tokens for Grok • GPT • Claude • Llama • Gemini**
@@ -20,4 +87,4 @@

**Free forever • MIT License • Made for AI developers**

⭐ Star if you saved tokens today!
11 changes: 11 additions & 0 deletions REFACTOR_COMPLETE.txt
@@ -0,0 +1,11 @@
Refactor completed successfully!

Now you can:
- python -m ai_token_crusher → GUI
- python -m ai_token_crusher -t → CLI
- pip install . → install as a package

Copy the remaining techniques from the old code into core/techniques/
Move the GUI from the previous code into interfaces/gui/

Everything is ready for the Product Hunt launch!
16 changes: 16 additions & 0 deletions docs/Workflow.md
@@ -0,0 +1,16 @@
## Workflow

- **main** → always stable & protected
- **dev** → active development (PRs go here)
- Contributors: create feature branch from **dev** → PR to **dev**
- Release: PR from **dev** → **main**

Never push directly to main!

Contribution steps:
- Fork the repo
- Create a feature/issue-# branch from dev
- Work on the issue
- Open a PR to dev
- After tests pass and the PR is approved, it is merged into dev

For a release: open a PR from dev to main
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
19 changes: 17 additions & 2 deletions pyproject.toml
@@ -1,3 +1,18 @@
[build-system]
# PEP 621 [project] metadata below requires setuptools 61 or newer
requires = ["setuptools>=61.0"]
build-backend = "setuptools.build_meta"

[project]
name = "ai-token-crusher"
version = "1.2.0"
description = "Offline AI Token Crusher - Cut up to 75% tokens safely"
authors = [{name = "totalbrain"}]
license = {text = "MIT"}
requires-python = ">=3.8"

dependencies = [
"tkinterdnd2==0.3.0; platform_system=='Windows'",
]

[project.scripts]
token-crusher = "ai_token_crusher.__main__:main"
Binary file removed src/__pycache__/__init__.cpython-313.pyc
Binary file not shown.
Binary file removed src/__pycache__/app.cpython-313.pyc
Binary file not shown.
Binary file removed src/__pycache__/config.cpython-313.pyc
Binary file not shown.
Binary file removed src/__pycache__/optimizations.cpython-313.pyc
Binary file not shown.
Binary file removed src/__pycache__/ui.cpython-313.pyc
Binary file not shown.
12 changes: 12 additions & 0 deletions src/ai_token_crusher/__main__.py
@@ -0,0 +1,12 @@
import sys

def main():
    if any(arg in sys.argv for arg in ["--terminal", "-t", "--help", "-h"]):
        from .interfaces.cli.main import run_cli
        run_cli()
    else:
        from .interfaces.gui.app import run_gui
        run_gui()

if __name__ == "__main__":
    main()
37 changes: 37 additions & 0 deletions src/ai_token_crusher/core/__init__.py
@@ -0,0 +1,37 @@
from .engine import OptimizationEngine
from .config import OPTIONS_DEFAULT, PROFILES
from .models import OptimizationResult

# The remaining techniques will be added later
def create_engine() -> OptimizationEngine:
    # Use explicit imports instead of `import *`
    from .techniques.remove_comments import remove_comments
    from .techniques.remove_docstrings import remove_docstrings
    from .techniques.remove_blank_lines import remove_blank_lines
    from .techniques.remove_extra_spaces import remove_extra_spaces
    from .techniques.single_line_mode import single_line_mode
    from .techniques.shorten_keywords import shorten_keywords
    from .techniques.replace_booleans import replace_booleans
    from .techniques.use_short_operators import use_short_operators
    from .techniques.remove_type_hints import remove_type_hints
    from .techniques.minify_structures import minify_structures
    from .techniques.unicode_shortcuts import unicode_shortcuts
    from .techniques.shorten_print import shorten_print
    from .techniques.remove_asserts import remove_asserts
    from .techniques.remove_pass import remove_pass
    engine = OptimizationEngine()
    engine.register("remove_comments", remove_comments)
    engine.register("remove_docstrings", remove_docstrings)
    engine.register("remove_blank_lines", remove_blank_lines)
    engine.register("remove_extra_spaces", remove_extra_spaces)
    engine.register("single_line_mode", single_line_mode)
    engine.register("shorten_keywords", shorten_keywords)
    engine.register("replace_booleans", replace_booleans)
    engine.register("use_short_operators", use_short_operators)
    engine.register("remove_type_hints", remove_type_hints)
    engine.register("minify_structures", minify_structures)
    engine.register("unicode_shortcuts", unicode_shortcuts)
    engine.register("shorten_print", shorten_print)
    engine.register("remove_asserts", remove_asserts)
    engine.register("remove_pass", remove_pass)
    return engine
22 changes: 22 additions & 0 deletions src/ai_token_crusher/core/config.py
@@ -0,0 +1,22 @@
OPTIONS_DEFAULT = {
    "remove_comments": True,
    "remove_docstrings": True,
    "remove_blank_lines": True,
    "remove_extra_spaces": True,
    "single_line_mode": True,
    "shorten_keywords": True,
    "replace_booleans": True,
    "use_short_operators": True,
    "remove_type_hints": True,
    "minify_structures": True,
    "unicode_shortcuts": True,
    "shorten_print": True,
    "remove_asserts": True,
    "remove_pass": True,
}

PROFILES = {
    "safe": {**OPTIONS_DEFAULT, "single_line_mode": False, "use_short_operators": False, "unicode_shortcuts": False, "replace_booleans": False},
    "aggressive": OPTIONS_DEFAULT.copy(),
    "nuclear": {k: True for k in OPTIONS_DEFAULT},  # everything on (the roadmap's "Nuclear" preset)
}
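The profile dicts rely on Python's `{**base, overrides}` merge, where later keys win. A tiny standalone illustration (toy option names, not the real config):

```python
# Toy illustration of the profile-merge pattern used in config.py:
# a profile starts from the defaults and flips individual flags off.
OPTIONS_DEFAULT = {"remove_comments": True, "single_line_mode": True}
PROFILES = {
    "safe": {**OPTIONS_DEFAULT, "single_line_mode": False},
}

opts = PROFILES["safe"]
print(opts)  # {'remove_comments': True, 'single_line_mode': False}
```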
45 changes: 45 additions & 0 deletions src/ai_token_crusher/core/engine.py
@@ -0,0 +1,45 @@
import time
from typing import Dict, Callable
from .models import OptimizationResult
from .config import OPTIONS_DEFAULT

class OptimizationEngine:
    def __init__(self):
        self.techniques: Dict[str, Callable[[str], str]] = {}
        self.order = list(OPTIONS_DEFAULT.keys())

    def register(self, name: str, func: Callable[[str], str]):
        self.techniques[name] = func
        if name not in self.order:
            self.order.append(name)

    def apply(self, text: str, options: Dict[str, bool]) -> OptimizationResult:
        start = time.perf_counter()
        result = text
        stats = {}

        for name in self.order:
            if options.get(name, False) and name in self.techniques:
                func = self.techniques[name]
                t0 = time.perf_counter()
                before = len(result)
                result = func(result)
                after = len(result)
                t = (time.perf_counter() - t0) * 1000

                saved = before - after
                pct = saved / before * 100 if before else 0
                stats[name] = {"time_ms": t, "saved_chars": saved, "saved_percent": pct}

        total_time = (time.perf_counter() - start) * 1000
        total_saved = len(text) - len(result)
        total_pct = total_saved / len(text) * 100 if text else 0
        stats["TOTAL"] = {"time_ms": total_time, "saved_percent": total_pct, "saved_chars": total_saved}

        return OptimizationResult(
            optimized_text=result.rstrip() + ("\n" if result.strip() else ""),
            stats=stats,
            total_saved_percent=total_pct,
            total_saved_chars=total_saved,
            total_time_ms=total_time,
        )
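To see the register/apply flow end to end without installing the package, here is a self-contained miniature of the engine — the dataclass and the single technique are inlined and simplified, so this is a sketch of the mechanism, not the shipped class:

```python
from dataclasses import dataclass, field

@dataclass
class MiniResult:
    optimized_text: str
    stats: dict = field(default_factory=dict)

class MiniEngine:
    """Simplified stand-in for OptimizationEngine: same register/apply shape."""
    def __init__(self):
        self.techniques = {}
        self.order = []

    def register(self, name, func):
        self.techniques[name] = func
        if name not in self.order:
            self.order.append(name)

    def apply(self, text, options):
        result, stats = text, {}
        for name in self.order:
            if options.get(name) and name in self.techniques:
                before = len(result)
                result = self.techniques[name](result)
                stats[name] = {"saved_chars": before - len(result)}
        return MiniResult(result, stats)

engine = MiniEngine()
engine.register("strip_blanks",
                lambda t: "\n".join(l for l in t.splitlines() if l.strip()))
res = engine.apply("a = 1\n\n\nb = 2\n", {"strip_blanks": True})
print(res.optimized_text)        # a = 1 / b = 2 on two lines
print(res.stats["strip_blanks"]) # {'saved_chars': 3}
```

Techniques stay plain `str -> str` functions, so the engine can time and measure each one uniformly.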
10 changes: 10 additions & 0 deletions src/ai_token_crusher/core/models.py
@@ -0,0 +1,10 @@
from dataclasses import dataclass
from typing import Dict

@dataclass
class OptimizationResult:
    optimized_text: str
    stats: Dict[str, Dict[str, float]]
    total_saved_percent: float
    total_saved_chars: int
    total_time_ms: float
8 changes: 8 additions & 0 deletions src/ai_token_crusher/core/techniques/minify_structures.py
@@ -0,0 +1,8 @@
# src/core/techniques/minify_structures.py
import re


def minify_structures(text: str) -> str:
    # Collapse spaces/tabs only, never newlines; the original r'\s+'
    # patterns also swallowed the newline after ':' and broke blocks
    text = re.sub(r',[ \t]+', ',', text)
    text = re.sub(r':[ \t]+', ':', text)
    return text
8 changes: 8 additions & 0 deletions src/ai_token_crusher/core/techniques/remove_asserts.py
@@ -0,0 +1,8 @@
# src/ai_token_crusher/core/techniques/remove_asserts.py
import re

def remove_asserts(text: str) -> str:
    # Remove assert statements, including indented ones; note that a
    # block whose only body is an assert will be left empty
    text = re.sub(r'^\s*assert\b.*$\n?', '', text, flags=re.MULTILINE)
    text = re.sub(r';\s*assert\b.*', '', text)  # inline form after a semicolon
    return text
4 changes: 4 additions & 0 deletions src/ai_token_crusher/core/techniques/remove_blank_lines.py
@@ -0,0 +1,4 @@
# src/core/techniques/remove_blank_lines.py
def remove_blank_lines(text: str) -> str:
    text = "\n".join(line for line in text.splitlines() if line.strip())
    return text
11 changes: 11 additions & 0 deletions src/ai_token_crusher/core/techniques/remove_comments.py
@@ -0,0 +1,11 @@
# src/ai_token_crusher/core/techniques/remove_comments.py
import re

def remove_comments(text: str) -> str:
    # Remove single-line comments (naive: also strips '#' inside strings)
    text = re.sub(r'#.*', '', text)

    # Remove triple-quoted strings (multi-line comments / docstrings);
    # [\s\S] already spans newlines, so no DOTALL flag is needed
    text = re.sub(r'"""[\s\S]*?"""|\'\'\'[\s\S]*?\'\'\'', '', text)

    return text
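The regex above also strips `#` characters that appear inside string literals. When the input is known to be valid Python, a `tokenize`-based variant avoids that — a sketch, not part of the current codebase:

```python
import io
import tokenize

def remove_comments_tokenized(source: str) -> str:
    # Drops COMMENT tokens only, so '#' inside string literals survives.
    # Requires syntactically valid Python input.
    tokens = tokenize.generate_tokens(io.StringIO(source).readline)
    kept = [tok for tok in tokens if tok.type != tokenize.COMMENT]
    return tokenize.untokenize(kept)
```

The trade-off is that `tokenize` raises on broken snippets, while the regex degrades gracefully; a production version might try the tokenizer first and fall back to the regex.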
7 changes: 7 additions & 0 deletions src/ai_token_crusher/core/techniques/remove_docstrings.py
@@ -0,0 +1,7 @@
# src/core/techniques/remove_docstrings.py
import re


def remove_docstrings(text: str) -> str:
    # count=1: only the first (module-level) docstring is removed
    text = re.sub(r'^[\r\n\s]*("""|\'\'\').*?\1', '', text, count=1, flags=re.DOTALL)
    return text
7 changes: 7 additions & 0 deletions src/ai_token_crusher/core/techniques/remove_extra_spaces.py
@@ -0,0 +1,7 @@
# src/core/techniques/remove_extra_spaces.py
import re


def remove_extra_spaces(text: str) -> str:
    # Collapse internal runs of spaces/tabs, but keep leading indentation,
    # which Python needs; the original r'[ \t]+' flattened indentation too
    text = re.sub(r'(?<=\S)[ \t]+', ' ', text)
    return text
13 changes: 13 additions & 0 deletions src/ai_token_crusher/core/techniques/remove_pass.py
@@ -0,0 +1,13 @@
# src/ai_token_crusher/core/techniques/remove_pass.py
def remove_pass(text: str) -> str:
    # Remove lone 'pass' statements
    lines = text.splitlines()
    new_lines = []
    for line in lines:
        stripped = line.strip()
        if stripped != "pass":
            new_lines.append(line)
        # When a 'pass' is dropped, also drop a blank line directly before it
        elif new_lines and new_lines[-1].strip() == "":
            new_lines.pop()
    return "\n".join(new_lines)
8 changes: 8 additions & 0 deletions src/ai_token_crusher/core/techniques/remove_type_hints.py
@@ -0,0 +1,8 @@
# src/core/techniques/remove_type_hints.py
import re


def remove_type_hints(text: str) -> str:
    # Naive: these patterns also match dict and slice colons, so this
    # technique is only safe on simple annotated code
    text = re.sub(r':\s*[^=\n>-]+', '', text)
    text = re.sub(r'->\s*[^:\n]+', '', text)
    return text
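Because the two regexes match any `:` followed by text, they mangle dict literals and slices. For valid Python input, an `ast`-based version removes only real annotations — a sketch of an alternative, not the shipped technique:

```python
import ast

class _StripHints(ast.NodeTransformer):
    # Clears parameter/return annotations; rewrites 'x: int = 1' to 'x = 1'
    def visit_FunctionDef(self, node):
        node.returns = None
        for arg in node.args.args + node.args.kwonlyargs + node.args.posonlyargs:
            arg.annotation = None
        self.generic_visit(node)
        return node

    def visit_AnnAssign(self, node):
        if node.value is None:
            return None  # a bare 'x: int' declaration is dropped entirely
        return ast.Assign(targets=[node.target], value=node.value)

def remove_type_hints_ast(source: str) -> str:
    tree = _StripHints().visit(ast.parse(source))
    return ast.unparse(ast.fix_missing_locations(tree))
```

Dict literals pass through untouched, at the cost of requiring parseable input (and Python 3.9+ for `ast.unparse`).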
4 changes: 4 additions & 0 deletions src/ai_token_crusher/core/techniques/replace_booleans.py
@@ -0,0 +1,4 @@
# src/core/techniques/replace_booleans.py
import re

def replace_booleans(text: str) -> str:
    # Word boundaries so identifiers like 'TrueValue' are left untouched
    text = re.sub(r'\bTrue\b', '1', text)
    text = re.sub(r'\bFalse\b', '0', text)
    text = re.sub(r'\bNone\b', '~', text)
    return text
9 changes: 9 additions & 0 deletions src/ai_token_crusher/core/techniques/shorten_keywords.py
@@ -0,0 +1,9 @@
# src/core/techniques/shorten_keywords.py
def shorten_keywords(text: str) -> str:
    # Note: an earlier '"if ": "if"' entry glued 'if' onto the next
    # token ('if x' -> 'ifx'), so it was removed
    rep = {
        "def ": "d ", "return ": "r ", "import ": "i ", "from ": "f ", "as ": "a ",
        "class ": "c ", "lambda ": "λ "
    }
    for k, v in rep.items():
        text = text.replace(k, v)
    return text
7 changes: 7 additions & 0 deletions src/ai_token_crusher/core/techniques/shorten_print.py
@@ -0,0 +1,7 @@
# src/core/techniques/shorten_print.py
import re


def shorten_print(text: str) -> str:
    text = re.sub(r'print\s*\(', 'p(', text)
    return text
4 changes: 4 additions & 0 deletions src/ai_token_crusher/core/techniques/single_line_mode.py
@@ -0,0 +1,4 @@
# src/core/techniques/single_line_mode.py
def single_line_mode(text: str) -> str:
    text = text.replace("\n", "⏎")
    return text
9 changes: 9 additions & 0 deletions src/ai_token_crusher/core/techniques/unicode_shortcuts.py
@@ -0,0 +1,9 @@
# src/core/techniques/unicode_shortcuts.py
import re


def unicode_shortcuts(text: str) -> str:
    # 'not in' first, so the plain 'in' pattern cannot split it; the old
    # trailing str.replace calls were dead code after these two subs
    text = re.sub(r'\bnot\s+in\b', '∉', text)
    text = re.sub(r'\bin\b', '∈', text)
    return text
10 changes: 10 additions & 0 deletions src/ai_token_crusher/core/techniques/use_short_operators.py
@@ -0,0 +1,10 @@
# src/core/techniques/use_short_operators.py
import re


def use_short_operators(text: str) -> str:
    text = text.replace("==", "≡").replace("!=", "≠")
    # Word boundaries catch 'and'/'or' with or without surrounding spaces;
    # the old mix of str.replace plus regex applied the same rewrite twice
    text = re.sub(r'\s*\band\b\s*', '∧', text)
    text = re.sub(r'\s*\bor\b\s*', '∨', text)
    return text
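These one-character substitutions only pay off if the crushed code can be expanded again before it is run. A hypothetical decoder — the mapping mirrors the substitutions in this PR, but the function and names are illustrative, not part of the codebase:

```python
# Hypothetical reverse mapping for the unicode substitutions in this PR.
REVERSE = {
    "∉": " not in ",
    "∈": " in ",
    "≡": "==",
    "≠": "!=",
    "∧": " and ",
    "∨": " or ",
}

def decode(text: str) -> str:
    # All keys are single characters, so simple sequential replace is safe
    for sym, py in REVERSE.items():
        text = text.replace(sym, py)
    return text
```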