Shared Go infrastructure for local-first crawler archives.
crawlkit is not a universal Slack, Discord, Notion, or GitHub crawler. It is
the reusable foundation beneath those tools: SQLite hygiene, TOML config
defaults, portable JSONL/Gzip packing, git-backed snapshot sharing, sync state,
CLI output helpers, control/status metadata, a shared terminal explorer, and
safe desktop-cache snapshot utilities.
```
go get github.com/vincentkoc/crawlkit@latest
```

Go packages are published by tagging this repository. There is no separate
package registry step. See docs/publishing.md for the release commands.
See docs/boundary.md for the crawlkit-versus-app ownership boundary.
- `config`: standard TOML config paths, runtime dirs, and token diagnostics.
- `store`: SQLite open/read-only/transaction/query helpers.
- `snapshot`: `manifest.json` plus JSONL/Gzip table snapshot export and import.
- `mirror`: clone/init/pull/commit/push helpers for private snapshot repos.
- `state`: generic crawler cursor and freshness records.
- `output`: text/json/log output helpers.
- `control`: crawl app metadata, command manifests, status payloads, and database inventory for launchers and automation.
- `tui`: shared terminal archive explorer with gitcrawl-style responsive panes, entity/member/detail lanes, compact sortable headers, mouse selection, floating right-click actions, sorting/filtering, and local/remote source status.
- `cache`: safe read-only local cache snapshot helpers.
- `gitcrawl` and `discrawl` consume crawlkit on `main`; `slacrawl` and `notcrawl` consume crawlkit on their `feat/use-crawlkit` integration branches until those app rewires are merged.
- The apps keep provider schemas, auth, desktop/API parsing, privacy filters, and user-facing CLI contracts. `crawlkit` owns only the reusable mechanics.
Library tests use temporary directories. They do not touch app runtime stores
such as `~/.config/gitcrawl`, `~/.slacrawl`, `~/.discrawl`, or `~/.notcrawl`.