I build AI systems that turn hard constraints into leverage.
My work sits at the intersection of AI model research, regulated financial infrastructure, developer tooling, and trading analytics. The common thread is efficiency: systems that reduce wasted memory, wasted motion, and wasted human attention.
My private flagship project is Hieroglyphics, a patented AI model architecture built around purpose-designed visual encodings. The thesis is that models can process dense machine-readable glyph images instead of conventional text token streams, creating a path toward dramatically lower memory usage during both training and inference.
The theoretical target is up to a 32x memory reduction while preserving semantic fidelity. The public version of the claim is intentionally high-level: it is a new representation and model-interface strategy for making AI systems more memory-efficient, not just another tokenizer swap.
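The intuition behind a representation-level memory win can be sketched with back-of-envelope KV-cache arithmetic: attention memory scales roughly linearly with sequence length, so a denser encoding that shortens the sequence shrinks the cache proportionally. Every number below is an illustrative assumption, not a published Hieroglyphics parameter.

```python
# Back-of-envelope arithmetic for representation-level memory savings.
# All model shapes and the 32:1 compression factor are placeholder
# assumptions for illustration, not Hieroglyphics specifics.

def kv_cache_bytes(seq_len: int, layers: int, heads: int,
                   head_dim: int, bytes_per_elem: int = 2) -> int:
    """Approximate KV-cache size: 2 (K and V) * layers * heads * head_dim * seq_len."""
    return 2 * layers * heads * head_dim * seq_len * bytes_per_elem

LAYERS, HEADS, HEAD_DIM = 32, 32, 128   # hypothetical model shape
text_tokens = 8192                      # conventional text-token stream
glyph_patches = text_tokens // 32       # assumed denser visual encoding

text_mem = kv_cache_bytes(text_tokens, LAYERS, HEADS, HEAD_DIM)
glyph_mem = kv_cache_bytes(glyph_patches, LAYERS, HEADS, HEAD_DIM)
print(f"text:  {text_mem / 2**30:.2f} GiB")   # → text:  4.00 GiB
print(f"glyph: {glyph_mem / 2**30:.3f} GiB")  # → glyph: 0.125 GiB
print(f"ratio: {text_mem / glyph_mem:.0f}x")  # → ratio: 32x
```

The takeaway is that a 32x shorter input sequence yields a 32x smaller KV cache under these assumptions, independent of the model's parameter count.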
I build local-first AI development systems with persistent memory, multi-agent workflows, cross-repo coordination, and quality gates that separate implementation from verification.
Current focus: Gald3r, a multi-IDE AI development control plane for Cursor, Claude Code, Gemini, Codex, OpenCode, and GitHub Copilot.
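Persistent, inspectable memory for agents reduces to a simple loop: embed, persist, retrieve by similarity. In a system like Gald3r this would live in PostgreSQL with pgvector; the minimal stand-in below uses an in-memory list and cosine similarity so the mechanism stays visible. `MemoryStore` and its methods are illustrative names, not Gald3r's actual interfaces.

```python
# Minimal local-first agent memory sketch: store (text, embedding) rows and
# retrieve the nearest memories to a query vector. Production systems would
# swap the list for a pgvector-backed table; the retrieval logic is the same.
import math

class MemoryStore:
    def __init__(self):
        self._rows: list[tuple[str, list[float]]] = []

    def add(self, text: str, embedding: list[float]) -> None:
        self._rows.append((text, embedding))

    def search(self, query: list[float], k: int = 3) -> list[str]:
        def cosine(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
            return dot / norm if norm else 0.0
        ranked = sorted(self._rows, key=lambda r: cosine(query, r[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

store = MemoryStore()
store.add("deploys happen via GitHub Actions", [1.0, 0.0])   # toy 2-d embeddings
store.add("tests must pass before merge", [0.0, 1.0])
print(store.search([0.9, 0.1], k=1))  # → ['deploys happen via GitHub Actions']
```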
I work on high-reliability financial systems: ACH/NACHA, ATM and ISO 8583 messaging, Oracle and PostgreSQL platforms, Kubernetes deployments, secure integration middleware, and automation around regulated operational workflows.
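For readers unfamiliar with ISO 8583: each message announces which data elements are present via a bitmap, where bit *i* (1-indexed, MSB first) of the 64-bit primary bitmap flags field *i*. A minimal decoder for the primary bitmap is sketched below; real messages also carry a secondary bitmap when bit 1 is set, and the field-level parsing that follows is omitted.

```python
# Decode an ISO 8583 primary bitmap (16 hex characters = 64 bits) into the
# list of data-element numbers it declares present.

def present_fields(bitmap_hex: str) -> list[int]:
    """Return 1-indexed field numbers flagged in a primary bitmap."""
    bits = int(bitmap_hex, 16)
    return [i for i in range(1, 65) if bits >> (64 - i) & 1]

# Illustrative bitmap in the shape of an authorization-style message
# (PAN, processing code, amount, STAN, response code, ...).
print(present_fields("7230001002000000"))  # → [2, 3, 4, 7, 11, 12, 28, 39]
```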
I build high-performance market-data tooling, technical analysis libraries, backtesting pipelines, and automation around exchange-driven workflows. My public Python package fstrent_polars_ta provides 100+ technical indicators using Polars for significant speedups over pandas-based alternatives.
Some of my best work is intentionally not public. That includes patented model architecture, AI-assisted creative systems, and automation platforms that deliver structured insight, workflow acceleration, and decision support without exposing the underlying implementation.
| Project | What it delivers | Stack |
|---|---|---|
| Patented AI Model | First-in-class patented model architecture targeting up to 32x lower memory use in training and inference across 48+ languages | AI architecture, VLMs, compression research |
| Gald3r | Persistent memory, cross-repo orchestration, and adversarial quality gates for AI coding agents | TypeScript, Python, FastMCP, Docker, PostgreSQL, pgvector |
| fstrent_polars_ta | Fast technical-analysis indicators and production-friendly market-data transforms | Python, Polars |
| Trading automation | Exchange workflows, strategy research, indicator pipelines, and backtesting infrastructure | Python, Polars, APIs |
| FinTech middleware | Payment, ATM, and enterprise integration systems that prioritize correctness and operational safety | Python, Oracle, PostgreSQL, Kubernetes |
| Private invention systems | Automation that turns complex inputs into structured, actionable outputs | AI, knowledge graphs, workflow automation |
- Make state durable. If a workflow depends on memory, that memory should live somewhere inspectable.
- Separate builder from verifier. The system that writes code should not be the only system judging it.
- Optimize the representation, not just the runtime. The biggest gains often come from changing what the model has to carry.
- Prefer explicit coordination over tribal knowledge. Repos, tasks, constraints, and deployment surfaces should know how they relate.
- Automate the boring part, but keep the human in control of irreversible decisions.
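The builder/verifier separation above can be sketched as a gate where the agent producing an artifact never decides whether it ships. All names here are illustrative, not Gald3r's actual interfaces, and the checks are stand-ins for real test runs and lints.

```python
# Sketch of the builder/verifier split: independent checks gate the output
# of the system that produced it.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Artifact:
    diff: str

def builder() -> Artifact:
    # Stand-in for an AI coding agent producing a change.
    return Artifact(diff="def add(a, b):\n    return a + b\n")

def verifier(artifact: Artifact, checks: list[Callable[[Artifact], bool]]) -> bool:
    # The gate passes only if every independent check agrees.
    return all(check(artifact) for check in checks)

checks = [
    lambda a: "return" in a.diff,     # stand-in for "tests pass"
    lambda a: "eval(" not in a.diff,  # stand-in for a security lint
]
print("ship" if verifier(builder(), checks) else "reject")  # → ship
```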
Languages and platforms: Python, TypeScript, SQL, PowerShell, FastAPI, Next.js, Docker, Kubernetes
AI systems: model architecture, VLMs, compression research, MCP, RAG, pgvector, agent orchestration, local-first memory
Data and finance: Oracle, PostgreSQL, Polars, ACH/NACHA, ISO 8583, exchange APIs
Delivery: GitHub Actions, Cloudflare Pages, OCI, Terraform, UV, CI/CD automation
If I am not at a keyboard, I am probably on a disc golf course.
