A Tinder-like image browser for Gelbooru and Danbooru with AI-powered next image selection.
Swipe right on images you like, left on ones you don't. BooruSwipe tracks tag-level feedback and uses an LLM to continuously refine its search queries — so the longer you use it, the more it learns what you're into.
- The first batch of images is random (configurable, default: 10)
- As you swipe, BooruSwipe builds a picture of your tag preferences
- After enough swipes, it asks an LLM to generate better search tags based on what you've liked and disliked
- Those tags drive the next round of image fetching
- If no LLM is configured, it falls back to your top liked tags directly
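The no-LLM fallback in the last step amounts to sorting the tag counters and taking the highest-scoring liked tags as the next query. A minimal sketch of that idea (function and variable names here are illustrative, not BooruSwipe's actual API; the default of 5 tags mirrors BOORU_TAGS_PER_SEARCH):

```python
from collections import Counter

def fallback_query(tag_scores: Counter, max_tags: int = 5) -> str:
    """Build a search query from the highest-scoring liked tags.

    tag_scores maps tag -> cumulative like/dislike score; only tags
    with a positive score are considered.
    """
    liked = [tag for tag, score in tag_scores.most_common() if score > 0]
    return " ".join(liked[:max_tags])

scores = Counter({"landscape": 4, "watercolor": 2, "sketch": -3, "portrait": 1})
print(fallback_query(scores, max_tags=3))  # -> "landscape watercolor portrait"
```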
One important nuance: BooruSwipe improves search queries, not image ranking. It's adaptive search term generation — it won't score individual images within results, just get better at finding the right pool to pull from.
You can also submit stronger feedback with the x2 buttons. Holding an x2 button opens x3 / x4 / x5 multipliers for even stronger like or dislike signals.
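Conceptually, each swipe applies a signed, weighted increment to every tag on the image, so an x3 dislike is just a larger decrement. A sketch under assumed names (not the real implementation):

```python
from collections import Counter

def record_swipe(tag_scores: Counter, tags: list[str], liked: bool, weight: int = 1) -> None:
    """Apply one swipe to the per-tag counters.

    weight is 1 for a normal swipe, 2-5 for the x2..x5 buttons.
    """
    delta = weight if liked else -weight
    for tag in tags:
        tag_scores[tag] += delta

scores = Counter()
record_swipe(scores, ["forest", "night"], liked=True)             # normal like
record_swipe(scores, ["night", "blurry"], liked=False, weight=3)  # x3 dislike
print(scores)  # forest: +1, night: -2, blurry: -3
```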
- Python 3.10+
- Credentials for one of:
  - Gelbooru — get your API key and user ID here (bottom of the page)
  - Danbooru — get your API key and username here
- An LLM provider (optional, but recommended)
```
git clone https://github.com/hlibr/BooruSwipe.git
cd BooruSwipe
python -m venv .venv
source .venv/bin/activate   # Windows: .venv\Scripts\activate
pip install .
cp booru.conf.example booru.conf
```

Edit `booru.conf`. Minimum config for each source:
Gelbooru:

```
BOORU_SOURCE=gelbooru
gelbooru_api_key=YOUR_GELBOORU_API_KEY
gelbooru_user_id=YOUR_GELBOORU_USER_ID

# LLM (optional)
api_key=your-api-key
base_url=https://api.openai.com/v1
model=gpt-4o-mini
```

Danbooru:
```
BOORU_SOURCE=danbooru
danbooru_api_key=YOUR_DANBOORU_API_KEY
danbooru_user_id=YOUR_DANBOORU_LOGIN

# LLM (optional)
api_key=your-api-key
base_url=https://api.openai.com/v1
model=gpt-4o-mini
```

Then run:
```
python -m booruswipe
```

Open http://localhost:8000. Give it at least 10 swipes before expecting recommendations to kick in.
For verbose logs:

```
python -m booruswipe --verbose
```

To run with Docker instead:

```
cp booru.conf.example booru.conf
# edit booru.conf, then:
docker compose build
docker compose up
```

Open http://localhost:8000.
The compose setup mounts ./booru.conf into the container, persists the SQLite database in a named volume, and runs with --verbose.
If your LLM runs locally, don't use `localhost` in `booru.conf` — inside Docker that refers to the container itself. Use `host.docker.internal` instead:
```
# LM Studio on host
base_url=http://host.docker.internal:1234/v1

# Ollama on host
base_url=http://host.docker.internal:11434/v1
```

If you only change `booru.conf`, a full rebuild isn't needed — just restart:

```
docker compose restart
```

Other useful commands:

```
docker compose logs -f
docker compose down
docker compose down -v   # also removes the database volume
```

All configuration lives in `booru.conf`.
```
BOORU_SOURCE=gelbooru   # or: danbooru
```

| Provider | api_key | base_url | model |
|---|---|---|---|
| OpenAI | `sk-...` | https://api.openai.com/v1 | `gpt-4o-mini` |
| Ollama | `ollama` | http://localhost:11434/v1 | `llama3.2` |
| LM Studio | `lm-studio` | http://localhost:1234/v1 | `local-model` |
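Whichever provider you pick, `booru.conf` stays a flat file of key=value lines with `#` comments, so it can be read without any dependencies. A minimal reader sketch (not BooruSwipe's actual loader):

```python
def load_conf(text: str) -> dict[str, str]:
    """Parse key=value lines, skipping blanks and # comments."""
    conf = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop trailing comments
        if not line:
            continue
        key, _, value = line.partition("=")
        conf[key.strip()] = value.strip()
    return conf

sample = """
BOORU_SOURCE=gelbooru  # or: danbooru
gelbooru_api_key=YOUR_GELBOORU_API_KEY
model=gpt-4o-mini
"""
print(load_conf(sample)["BOORU_SOURCE"])  # -> gelbooru
```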
| Setting | Required | Default | Description |
|---|---|---|---|
| `BOORU_SOURCE` | Yes | `gelbooru` | Which booru to use (`gelbooru` or `danbooru`) |
| `gelbooru_api_key` | If Gelbooru | — | Gelbooru API key |
| `gelbooru_user_id` | If Gelbooru | — | Gelbooru user ID |
| `danbooru_api_key` | If Danbooru | — | Danbooru API key |
| `danbooru_user_id` | If Danbooru | — | Danbooru login name |
| `api_key` | No | — | LLM provider API key |
| `base_url` | No | `https://api.openai.com/v1` | LLM provider base URL |
| `model` | No | — | Model name for chat completions |
| `LLM_MIN_SWIPES` | No | `10` | Swipes required before LLM kicks in |
| `LLM_MAX_TAGS` | No | `30` | Max cumulative tags sent to the LLM |
| `LLM_TAG_FILTER_MIN_COUNT` | No | `1` | Minimum tag score to include in LLM input |
| `LLM_USE_STRUCTURED_OUTPUT` | No | `true` | Validate LLM output against response schema |
| `LLM_RECENT_POSITIVE` | No | `10` | Recent positive tags sent to LLM |
| `LLM_RECENT_NEGATIVE` | No | `10` | Recent negative tags sent to LLM |
| `LLM_RECENT_FILTER_CUMULATIVE_LIKES` | No | `true` | Filter recent positives already in cumulative likes before sending to LLM |
| `BOORU_TAGS_PER_SEARCH` | No | `5` | Max tags used in the primary search query |
| `BOORU_TAGS_PER_SEARCH_FALLBACK` | No | `3` | Max tags used in the fallback search query |
| `RANDOM_IMAGE_CHANCE` | No | `5` | % chance to show a random image instead of a recommendation |
| `DOUBLE_LIKED_NEVER_IGNORE` | No | `false` | Exempt double-liked images from repeat filtering |
| `BOORU_SEARCH_LIMIT` | No | `100` | Images requested per search page |
| `BOORU_SEARCH_PAGES` | No | `5` | Pages to scan before giving up |
| `BOORU_SEARCH_SLEEP` | No | `0.15` | Delay between paginated requests (seconds) |
BooruSwipe stores everything locally in a SQLite database:
- `swipes` — each swipe event with booru source, tags, URLs, and weight
- `tag_counts` — long-term like/dislike counters per tag
- `swiped_images` — seen image IDs to reduce repeats
- `double_liked_images` — IDs exempted from repeat filtering
- `preference_profiles` — latest LLM-generated tag recommendations
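These are ordinary SQLite tables, so they can be inspected with the standard sqlite3 module. The column names below are illustrative guesses, not the real schema — a sketch of querying a tag_counts-style table:

```python
import sqlite3

# Hypothetical miniature of the tag_counts table; real columns may differ.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE tag_counts (tag TEXT PRIMARY KEY, score INTEGER)")
db.executemany(
    "INSERT INTO tag_counts VALUES (?, ?)",
    [("forest", 4), ("night", -2), ("watercolor", 2)],
)

# Top liked tags, in the same order a no-LLM fallback query would use them.
rows = db.execute(
    "SELECT tag FROM tag_counts WHERE score > 0 ORDER BY score DESC"
).fetchall()
print([tag for (tag,) in rows])  # -> ['forest', 'watercolor']
```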
```
pytest -q                         # run tests
python -m booruswipe --reset-db   # wipe the local database
```

- Single-user: session state lives in process memory, not per-user sessions
- No candidate ranking: recommendation improves search terms, not image ordering within results
Possible improvements:

- Score retrieved images instead of picking randomly from search results
- Track tag combinations, not just independent tags
- Smarter exploration beyond random chance
- Per-user session handling
- Live integration tests for Danbooru, Gelbooru, and LLM providers
MIT