
Conversation

@joshsny (Contributor) commented Jan 12, 2026

I’m going to be looking at the cloud experience in array next, so wanted to bash out some thoughts on the architecture.

Will use this as a place to iterate and get thoughts.

@joshsny joshsny requested a review from a team as a code owner January 12, 2026 16:40
@charlesvien charlesvien changed the title cloud architecture plan: cloud architecture Jan 13, 2026
@jonathanlab (Contributor) left a comment


LGTM, nice work on this! Left some questions about the replay functionality and how this ties into our current setup with ACP.

> - Review changes, pull them locally, continue
> - Feels like delegating to a colleague
>
> Most cloud agent implementations force you to choose one or the other. The goal here is to support both seamlessly—and let you switch between them without friction.

I just want to flag that this goal is so consequential we could sell even a mediocre product very easily if we hit it.

@tatoalo commented Jan 13, 2026

Great stuff @joshsny 🐐, really like the direction of this! 🔥

Some thoughts:

I really like using Temporal for lifecycle only. In ph-ai we use it as a hot-path message bus (with Redis streams in front), and we pay for latency and additional complexity that simply isn't needed here.
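
To make the "lifecycle only" boundary concrete, here is a minimal sketch using the Temporal TypeScript SDK: the workflow provisions, waits for a terminal signal, and tears down, while the hot path stays outside Temporal entirely. The activity names (`provisionSandbox`, `teardownSandbox`), the signal name, and the timeouts are illustrative assumptions, not anything from the plan.

```ts
import { proxyActivities, defineSignal, setHandler, condition } from '@temporalio/workflow';
import type * as activities from './activities'; // hypothetical activities module

// Hypothetical activities: provision/teardown of the cloud sandbox for a run.
const { provisionSandbox, teardownSandbox } = proxyActivities<typeof activities>({
  startToCloseTimeout: '5 minutes',
});

// Signalled by the hot path once the run reaches a terminal state.
export const runFinished = defineSignal<[string]>('runFinished');

export async function cloudRunLifecycle(runId: string): Promise<string> {
  let finalStatus: string | undefined;
  setHandler(runFinished, (status) => {
    finalStatus = status;
  });

  await provisionSandbox(runId);
  try {
    // Agent messages never pass through this workflow; Temporal just waits for
    // the terminal signal (or times out) and guarantees cleanup runs.
    await condition(() => finalStatus !== undefined, '24 hours');
  } finally {
    await teardownSandbox(runId);
  }
  return finalStatus ?? 'timed_out';
}
```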

On the Content-Addressed Storage part, I'm not sure about a single JSONL blob that keeps growing, given possible concurrent retries/writes/failures. We could segment it instead, with progressive ids inside the outer run_id, so that each write is atomic and every segment is immutable; fetching segments is also more efficient when working out what to replay on recovery (rough sketch below).
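
A rough sketch of what that segmentation could look like, just to anchor the suggestion. The `BlobStore` interface, the write-once `putIfAbsent` semantics, and the `runs/<run_id>/segment-<seq>.jsonl` key layout are all assumptions for illustration, not the plan's actual storage API.

```ts
// Assumed write-once blob store interface.
interface BlobStore {
  putIfAbsent(key: string, body: string): Promise<boolean>; // fails/no-ops if key exists
  list(prefix: string): Promise<string[]>;
  get(key: string): Promise<string>;
}

// Each flush writes one immutable segment under the outer run_id.
async function writeSegment(store: BlobStore, runId: string, seq: number, events: object[]): Promise<void> {
  const key = `runs/${runId}/segment-${String(seq).padStart(8, '0')}.jsonl`;
  const body = events.map((e) => JSON.stringify(e)).join('\n');
  // Write-once semantics make concurrent retries safe: a duplicate flush of the
  // same segment is a no-op rather than a partial overwrite of one growing blob.
  await store.putIfAbsent(key, body);
}

// Recovery replays segments in order, so a worker can resume from the last
// contiguous sequence number without re-reading a single large file.
async function replaySegments(store: BlobStore, runId: string): Promise<object[]> {
  const keys = (await store.list(`runs/${runId}/`)).sort();
  const events: object[] = [];
  for (const key of keys) {
    for (const line of (await store.get(key)).split('\n')) {
      if (line.trim()) events.push(JSON.parse(line));
    }
  }
  return events;
}
```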

On the "local wins" approach, I agree on the direction but perhaps we would need some kind of "caching" layer but maybe this is an early optimization problem.

Maybe too technical at this stage, but the watcher may need strong ignore rules and operation batching: if we run a package manager command, the file watcher will see a ton of events under the same path, and we probably want to avoid emitting all of them at once (sketch below).
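
Rough sketch of ignore rules plus debounce-style batching, assuming a chokidar-based watcher (which may not be what the plan actually uses); the ignore list and the quiet period are arbitrary placeholders.

```ts
import chokidar from 'chokidar';

// Collect changes for a short quiet period, then emit one consolidated batch
// instead of thousands of individual events (e.g. after a package manager run).
const pending = new Map<string, string>(); // path -> last event type
let flushTimer: NodeJS.Timeout | undefined;

function scheduleFlush(emit: (batch: Map<string, string>) => void, quietMs = 250): void {
  if (flushTimer) clearTimeout(flushTimer);
  flushTimer = setTimeout(() => {
    const batch = new Map(pending);
    pending.clear();
    emit(batch);
  }, quietMs);
}

const watcher = chokidar.watch('.', {
  ignoreInitial: true,
  // Strong ignore rules: heavy, noisy directories never reach the sync path.
  ignored: (p: string) => /node_modules|\.git|dist|\.venv/.test(p),
});

watcher.on('all', (event, path) => {
  pending.set(path, event); // later events for the same path overwrite earlier ones
  scheduleFlush((batch) => {
    console.log(`syncing ${batch.size} changed paths in one operation`);
  });
});
```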

I am mostly thinking about the ordering of changes: multiple writes can arrive out of order with respect to the local client, but we can spell this out separately (one possible shape below).
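
One possible shape for handling out-of-order arrival, purely illustrative: tag each remote write with a per-run sequence number and hold back anything that skips ahead until the gap is filled. All names here are made up.

```ts
interface RemoteWrite {
  seq: number;       // monotonically increasing, assigned by the sender
  path: string;
  apply: () => void; // applies the change locally
}

class OrderedApplier {
  private nextSeq = 0;
  private buffer = new Map<number, RemoteWrite>();

  receive(write: RemoteWrite): void {
    this.buffer.set(write.seq, write);
    // Apply everything we can in order; stop at the first missing sequence number.
    while (this.buffer.has(this.nextSeq)) {
      const next = this.buffer.get(this.nextSeq)!;
      this.buffer.delete(this.nextSeq);
      next.apply();
      this.nextSeq += 1;
    }
  }
}
```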

Q: what kind of ack are you thinking for _posthog/ack?
