Some of the work:
- bytelocker.el / bytelocker.nvim — buffer & region encryption inside Emacs and Neovim, with three native cipher implementations on top of a shared core.
- anki-tikz — patch on top of Anki to swap the LaTeX engine, written over a weekend so my flashcards could compile the mathematical TikZ I cared about.
- tikzjax — fork of TikZJax extended with algorithm and UML diagram support. The rendering backbone of abaj.ai, and useful to anyone running a static site that wants typeset diagrams.
- Typtel — Go macOS menu-bar telemetry app with its own Homebrew tap: brew install --cask typtel, then read your own keystrokes back through a TUI.
- chessmarkable-rpp, regenda, rmpp-koreader-plugins — chess, CalDAV, and KOReader plugins ported to the reMarkable Paper Pro. If you own one and want it to do more, this is the start.
- 3-space math map — 48 topics of mathematics laid out as a Svelte + Three.js graph: high-school through to research, with historical edges between them.
- Insufficiency of Kantian Ethics for AMA — a thorough but terse argument against rule-based artificial moral agents.
- Value Sensitive Design of GPT-3 — a human-centred-AI evaluation of ChatGPT's development and rollout.
- Perceptron — Rosenblatt's logical and historical conception. Limitations advertised, Marvin Minsky mentioned.
- Multi-layered Perceptron — the continuation: solving non-trivial non-linear problems in code.
- Solving Peg Solitaire — backtracking implementation for an old family puzzle, with a research note on when neural methods stop helping.
- Michael Nielsen's Deep Learning Book — a thorough study of feedforward networks.
- Birthday Problems — a fresh set of problems released each 26th of December to mark another year of life and study.
- KiTS19 — ranked #57 globally with 0.9129 Dice on 3D kidney + tumour segmentation; nnU-Net trained on H200 GPUs, technical report attached.
- pegs.abaj.ai, mines.abaj.ai, arcade.abaj.ai — three browser-deployed ML and multiplayer demos out of abaj.ai's 14-subdomain network.
- 10,000 Hours of ML — long-running self-directed lab; CV / NLP / RL / supervised baselines, paired with written reflection.
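The perceptron write-ups above boil down to a few lines. Here is a minimal Rosenblatt-style sketch (a generic illustration, not the posts' actual code), trained on OR:

```python
# Minimal Rosenblatt perceptron: hypothetical illustration, not the
# project's code. Learns a linearly separable function (here: OR).

def train_perceptron(samples, epochs=20, lr=0.1):
    """samples: list of (inputs, target) pairs with targets in {0, 1}."""
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, t in samples:
            y = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = t - y  # Rosenblatt update: shift weights toward the target
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

or_data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(or_data)
```

Swap OR for XOR and the loop never converges, which is exactly the Minsky limitation the multi-layer follow-up exists to fix.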
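The peg-solitaire solver is plain backtracking. A toy one-dimensional variant shows the shape of the search — jump a peg over a neighbour into an empty hole until one peg remains; the real board and code differ:

```python
# Backtracking skeleton in the shape of the peg-solitaire solver: a toy
# one-dimensional variant, not the project's actual board or code.

def solve(board, moves=None):
    """board: list of booleans, True = peg. Returns a winning move list or None."""
    if moves is None:
        moves = []
    if sum(board) == 1:
        return moves  # one peg left: solved
    for i in range(len(board)):
        for d in (-1, 1):  # try jumping left and right
            j, k = i + d, i + 2 * d  # j: jumped neighbour, k: landing hole
            if 0 <= k < len(board) and board[i] and board[j] and not board[k]:
                board[i] = board[j] = False  # make the jump
                board[k] = True
                result = solve(board, moves + [(i, k)])
                if result is not None:
                    return result
                board[i] = board[j] = True  # undo and backtrack
                board[k] = False
    return None
```

The research note's point lives in this structure: the state space is small and exact, so a learned heuristic has nothing to prune that the undo step doesn't already handle.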
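The KiTS19 score is a Dice coefficient, 2|A ∩ B| / (|A| + |B|) over predicted and ground-truth masks. A plain-Python sketch of the metric (not the nnU-Net evaluation code):

```python
# Dice coefficient on binary segmentation masks, the metric behind the
# KiTS19 score. Plain-Python sketch over flat 0/1 voxel lists.

def dice(a, b):
    """a, b: equal-length sequences of 0/1 voxel labels."""
    inter = sum(x * y for x, y in zip(a, b))  # |A ∩ B|
    total = sum(a) + sum(b)                   # |A| + |B|
    return 1.0 if total == 0 else 2.0 * inter / total
```

A 0.9129 on this scale means the predicted kidney-plus-tumour volume overlaps the ground truth almost completely; the empty-mask case is conventionally scored as a perfect 1.0.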
These days I'm marching through the Blind 75 in Python and the Hugging Face agents course.
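For flavour, the usual opener of that set is two-sum; the standard one-pass hash-map solution (a generic sketch, not lifted from my solutions):

```python
# Two-sum, the canonical Blind 75 opener: one-pass hash map, O(n) time.
# Generic sketch for illustration, not taken from any solution set.

def two_sum(nums, target):
    """Return indices i < j with nums[i] + nums[j] == target, else None."""
    seen = {}  # value -> index of its first occurrence
    for j, x in enumerate(nums):
        if target - x in seen:
            return seen[target - x], j
        seen.setdefault(x, j)  # keep the earliest index for duplicates
    return None
```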



