
Store Durable Object alarms per-namespace on disk #6104

Open
threepointone wants to merge 2 commits into main from alarm-storage

Conversation

@threepointone

(thanks opus)

Alarms in workerd are currently stored in a single global in-memory SQLite database. This means all scheduled alarms are lost on process restart, which prevents testing alarm resiliency scenarios and doesn't match production behavior.

This PR moves alarm scheduler ownership from Server into ActorNamespace, so each DO namespace gets its own AlarmScheduler backed by metadata.sqlite in the namespace's storage directory. On-disk namespaces get persistent alarms; in-memory namespaces get in-memory alarms.

Follows up on the design discussion in #605 (comment).

Design decisions

Per-namespace, not global. Kenton's original feedback on #605 was that a single global alarm database is problematic: it decouples alarm storage from namespace data, it makes splitting/combining configs lossy, and it creates the confusing possibility of on-disk DOs with in-memory alarms (or vice versa). Each namespace now owns its scheduler and stores alarms alongside its actor .sqlite files.

No new configuration. Alarm storage mode is inferred from the existing durableObjectStorage setting. If the namespace uses localDisk, alarms go on disk. If it's inMemory or none, alarms stay in memory. This was an explicit goal from the #605 discussion -- no new config knobs needed.

metadata.sqlite as the filename. Named generically (rather than alarms.sqlite) so it can hold other per-namespace metadata in the future, as Kenton suggested.

AlarmScheduler class unchanged. The class already accepted a VFS + path in its constructor, so no API changes were needed. The only change is where schedulers are created and who owns them.

Changes

  • server.h: Removed global AlarmScheduler member, startAlarmScheduler() declaration, and the alarm-scheduler.h include (moved to .c++).
  • server.c++: ActorNamespace now owns a kj::Maybe<kj::Own<AlarmScheduler>>. Created in link() using the namespace's own VFS (on-disk) or an in-memory VFS (fallback). The namespace self-registers with its scheduler at link time. Removed the global scheduler, LinkedIoChannels::alarmScheduler, and all related wiring.
  • server-test.c++: Added a new "Durable Object alarm persistence (on disk)" test that sets an alarm, tears down the server, restarts with the same storage directory, and verifies the alarm survived. Updated two existing tests whose file-count assertions now include metadata.sqlite.

Notes for reviewers

  • alarm-scheduler.h and alarm-scheduler.c++ are completely untouched.
  • The kj::systemPreciseCalendarClock() call in the ActorNamespace constructor is a pre-existing pattern -- the old startAlarmScheduler() called it the same way. Threading a calendar clock through the Server constructor would be cleaner but significantly more churn for no immediate benefit.
  • Declaration order in ActorNamespace matters: ownAlarmScheduler is declared before actors so it outlives all actors that reference it (same constraint as the existing actorStorage field).
  • The this capture in the registerNamespace lambda is safe because the scheduler is owned by the namespace and destroyed first. This matches the original pattern which captured &actorNs.

@threepointone threepointone requested review from a team as code owners February 18, 2026 14:22
@github-actions

github-actions bot commented Feb 18, 2026

All contributors have signed the CLA ✍️ ✅
Posted by the CLA Assistant Lite bot.

@threepointone
Author

I have read the CLA Document and I hereby sign the CLA

@threepointone
Author

recheck

github-actions bot added a commit that referenced this pull request Feb 18, 2026
@kentonv kentonv self-requested a review February 18, 2026 14:40
@threepointone
Author

the workers-sdk test failure seems to be just about fixing the file count on the workers-sdk side.
I don't understand the Windows failure, though.

@southpolesteve
Contributor

/bigbonk roast this PR. Hi Sunil. testing bonk

@threepointone
Author

/bigbonk roast this PR

@ask-bonk
Contributor

ask-bonk bot commented Feb 25, 2026

OK, I've seen enough. Time to roast.


The Roast

"(thanks opus)" -- The PR description is literally better written than most theses I've reviewed, and you're crediting it to a chatbot. Sunil, this is the equivalent of showing up to a potluck with a Michelin-star catered dish and going "oh yeah my microwave did this."

The actual code though? Let's talk about it.

The link() method is doing a three-act play. Lines 2211-2249 of server.c++: Act 1: maybe create an on-disk scheduler. Act 2: if that didn't work, maybe create an in-memory scheduler. Act 3: if either worked, register the namespace. This method is begging to be broken up. It reads like a choose-your-own-adventure book where every path ends at the same page anyway.

You check config.tryGet<Durable>() THREE separate times in link(). Once at line 2213, again at line 2226, and a third time at line 2239. The config isn't changing between those calls. You could just... save it in a variable. The Durable config didn't run away.

The this capture in the registerNamespace lambda (line 2242) -- you wrote a whole paragraph in the PR description justifying why it's safe, which is the universal code smell for "I know this looks sketchy." The old code captured &actorNs (a reference to a local) which was also sketchy, so I guess you're maintaining tradition.

kj::systemPreciseCalendarClock() is now threaded into ActorNamespace as a const kj::Clock& member (line 2906), but you openly admit in the PR description this is tech debt you're choosing to perpetuate. Bold move to document your own sins and ship them anyway.

The test (line 2293+) sets an alarm for 4102444800000 (January 1, 2100). Optimistic that workerd will still be around in 74 years. Even more optimistic that someone won't break this test before then. At least you didn't pick Date.now() + 1000 and create a flaky test, so credit where it's due.

File count assertions -- you bumped 4 to 6 and 2 to 3 across multiple tests. These are the kind of magic-number assertions that will haunt every future contributor who adds a new file to namespace storage. The next person to touch this will spend 30 minutes figuring out why the count is off by one, look at the git blame, and curse your name.

metadata.sqlite as a filename -- the PR description says "named generically so it can hold other per-namespace metadata in the future." This is the architectural equivalent of buying a 5-bedroom house "in case we have kids someday." You might, but right now it's just one alarm table rattling around in there.

The deletion of startAlarmScheduler() and the global alarmScheduler member from Server is clean, I'll give you that. Removing ~30 lines of global state wiring and the LinkedIoChannels::alarmScheduler field is the kind of simplification that actually makes the codebase better. This is the one part Opus can't take credit for because it required understanding the existing architecture.

The facets test fix (line 5031+) where you added || name.startsWith("metadata.sqlite") to the assertion loop -- this is the "I broke an existing test and patched it" special. Functional? Yes. Elegant? It's duct tape on a tuxedo.

Bottom line: The design is sound -- per-namespace alarm storage is clearly the right call, the PR description is immaculate (thanks, Opus), and the test coverage is genuinely good. But the implementation has the telltale signs of "an AI wrote the structure and a human glued it together" -- slightly redundant control flow, a few too many "trust me, this is fine" comments, and magic numbers scattered like confetti.

7/10 -- would merge after a cleanup pass. The architecture carries it.


Member

@kentonv left a comment

Sorry for slow review.

Comment on lines 2214 to 2219:

    this->actorStorage.emplace(dir.openSubdir(
        kj::Path({d.uniqueKey}), kj::WriteMode::CREATE | kj::WriteMode::MODIFY));

    // Create per-namespace alarm scheduler backed by on-disk storage in the
    // namespace directory, alongside the per-actor .sqlite files.
    auto& as = KJ_ASSERT_NONNULL(this->actorStorage);

Suggested change -- bind the reference directly from the emplace() return value rather than going through KJ_ASSERT_NONNULL:

    auto& as = this->actorStorage.emplace(dir.openSubdir(
        kj::Path({d.uniqueKey}), kj::WriteMode::CREATE | kj::WriteMode::MODIFY));

    // Create per-namespace alarm scheduler backed by on-disk storage in the
    // namespace directory, alongside the per-actor .sqlite files.


    KJ_IF_SOME(d, config.tryGet<Durable>()) {
      auto idFactory = kj::heap<ActorIdFactoryImpl>(d.uniqueKey);
      sched->registerNamespace(d.uniqueKey,

So, AlarmScheduler is currently designed to handle multiple namespaces registered at once, and to that end the table it creates has a column called actor_unique_key which identifies the DO namespace.

But now that we store a separate database per namespace, we don't really want that anymore. It's just wasting bytes and potentially causing headaches if someone wants to change keys or something.

Seems like we should update the AlarmScheduler implementation to remove the redundant functionality.

This is something that's best to do now, not later after a bunch of these tables have already been written to disk...

threepointone and others added 2 commits February 27, 2026 12:58
Previously, all alarms were stored in a single global in-memory SQLite
database, meaning they were lost on process restart. This made it
impossible to test alarm resiliency scenarios.

Move alarm scheduler ownership from Server into ActorNamespace, so each
DO namespace gets its own AlarmScheduler backed by metadata.sqlite in
the namespace's storage directory. On-disk namespaces get persistent
alarms; in-memory namespaces get in-memory alarms. No new configuration
is needed -- alarm storage follows the existing durableObjectStorage
setting.

Ref: #605 (comment)
Co-authored-by: Cursor <cursoragent@cursor.com>
Address review feedback from kentonv: since each DO namespace now has
its own separate SQLite database for alarm storage, the actor_unique_key
column in the _cf_ALARM table is redundant. Previously, AlarmScheduler
was designed to handle multiple namespaces in a single database, using
actor_unique_key to distinguish between them. Now that we store a
separate database per namespace, this column just wastes bytes and could
cause headaches if someone wants to change keys later. As kentonv noted,
this is best fixed now before any of these tables are written to disk in
the wild.

Changes:

- Simplified ActorKey struct to contain only actorId (removed uniqueKey)
- Simplified _cf_ALARM schema from (actor_unique_key, actor_id,
  scheduled_time) with a compound primary key to (actor_id, scheduled_time)
  with actor_id as the sole primary key
- Removed registerNamespace() -- the GetActorFn is now passed directly
  to the AlarmScheduler constructor, since each scheduler serves exactly
  one namespace
- Replaced the namespaces HashMap with a single GetActorFn member
- Restructured ActorNamespace::link() in server.c++ to create the
  GetActorFn once and pass it to whichever storage path (on-disk or
  in-memory) is used, eliminating duplication
- Removed unused setAlarmInMemory declaration and unused byte type alias
@threepointone
Author

got the clanker to rebase and force push a rewrite, 🏓
