
Defer fresh TUI startup hydration#21857

Draft
starr-openai wants to merge 2 commits into main from starr/startup-logging-20260508

Conversation

Contributor

@starr-openai starr-openai commented May 9, 2026

Summary

  • keep real app-server bootstrap/model migration synchronous, then defer only interactive fresh thread/start
  • attach the background-started primary thread through the existing TUI thread routing path
  • remove the provisional bootstrap state, duplicated background bootstrap helper, and broad startup op queue from the first prototype
  • retain startup timing probes used to compare first-frame/typeable latency against background thread startup

Measurement

Warm, cached runs of the opt Bazel binary:

  • first terminal output median: 472ms
  • first_frame_ready median: 778ms
  • background thread/start completion median: 1499ms
  • measured typeable-frame win versus waiting for thread startup: ~723ms median

The remaining cold-start cost is still bootstrap/model-list/account work; this version deliberately preserves that synchronous path for correctness and reviewability.

Validation

  • just fmt
  • /Users/starr/code/openai/project/dotslash-gen/bin/bazel build -c opt --bes_backend= --bes_results_url= //codex-rs/cli:codex
  • ./bazel-bin/codex-rs/cli/codex --version
  • launched the optimized TUI binary repeatedly in a PTY with startup logging; it reached background thread/start successfully and was terminated intentionally

Draft Notes

  • focused behavioral tests are still needed before marking this review-ready: queuing user input before the async attach, displaying attach failures, and confirming no behavior change on the resume/fork/initial-prompt paths
  • broad startup timing probes are included for this draft measurement pass; they can be split or trimmed before review-ready if we want the PR to be purely UX behavior

starr-openai and others added 2 commits May 8, 2026 17:46
Render the fresh-start TUI before model/catalog and thread startup hydration complete. Queue early Codex ops until the primary thread is attached, then drain them after the background bootstrap finishes.

Add startup timing spans around model catalog loading, thread startup, Codex spawn, and session initialization so startup measurements can separate first-frame latency from background hydration work.

This is intentionally opened as a draft prototype. Known follow-ups include restoring startup model migration prompts and startup tooltip prefetch behavior on the deferred path, and tightening any dead-code warnings introduced by the split.

Co-authored-by: Codex <noreply@openai.com>
Keep real bootstrap and model migration synchronous, then defer only interactive fresh thread startup. Attach the background thread with the existing primary-thread routing path instead of carrying provisional bootstrap state through the TUI.

Remove the broad startup op queue and duplicated background bootstrap helper; ChatWidget already queues user text until session configuration is ready.

Co-authored-by: Codex <noreply@openai.com>