Add CLI commands for simulation episodes and competition pages#968
Conversation
Implement four new competition subcommands:
- `competitions episodes` - list episodes for a submission
- `competitions episode-replay` - download episode replay
- `competitions episode-logs` - download agent logs for an episode
- `competitions pages` - list competition pages
stevemessick
left a comment
Looks great, at first glance. I'm still looking at the code, but the first thing I noticed is that there is no documentation. docs/competitions.md would be a good place to add some. Also, if it is relevant, I had some success using Gemini CLI to create tutorials. See docs/tutorials.md. I suspect all the docs could be generated by one tool or another.
Document the end-to-end workflow for simulation competitions: finding competitions, viewing pages, submitting agents, listing episodes, and downloading replays and agent logs.
Show single-file (main.py) and multi-file (submission.tar.gz) upload as the primary submission methods, with notebook submission as an alternative.
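The two primary submission paths mentioned above could look roughly like the following. This is a hedged sketch: the competition slug (`lux-ai-season-3`), the packaged file list, and the `-m` messages are placeholders, not taken from the PR.

```shell
# Single-file agent: submit main.py directly
kaggle competitions submit -c lux-ai-season-3 -f main.py -m "single-file agent"

# Multi-file agent: package the files into a tarball first, then submit it
tar -czf submission.tar.gz main.py agent/
kaggle competitions submit -c lux-ai-season-3 -f submission.tar.gz -m "multi-file agent"
```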
- Print output file path after downloading replays and agent logs
- Add hint after episodes list pointing to episode-replay/episode-logs
- Improve help text for submission_id, episode_id, and agent_index args
Thanks @stevemessick! I added a tutorial as well as more helpful messages when using the commands.
Allows fetching a single page by name instead of all pages, e.g. `kaggle competitions pages titanic --page-name rules`.
One minor concern is related to usability. Other CLI commands can be abbreviated to the shortest unique prefix, but the "episode-*" sub-commands all require typing "episode-". That's not a big deal for our (future) AI tool usage, but could be a consideration for people who don't like to type.
stevemessick
left a comment
It would be good to have some tests. You could add some to tests/test_commands.sh to get minimal coverage. The unit tests are more like integration tests that require a live database, so you'd probably not want to bother with that. (It would be nice, but time-consuming.)
/cc @rosbo
> ## 1. Find and Inspect the Competition
>
> List available competitions and look for simulation competitions:
How is a simulation competition distinguished from other types?
Maybe add `--search lux` to show an example that brings up a suitable result. Or not, if it is going to expire soon. Would be nice to have an evergreen example.
> Download the competition's starter kit and any provided data:
>
> ```bash
> mkdir lux-ai
> ```
Could use `... -p lux-ai`.
- Rename episode-replay/episode-logs to replay/logs for shorter typing
- Add pages tests to test_commands.sh
- Clarify how to identify simulation competitions in docs
- Use -p flag instead of mkdir+cd for downloads in docs
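With the rename to `replay`/`logs`, invocations get shorter; a sketch with placeholder IDs, assuming the renamed commands keep the same positional arguments as before:

```shell
kaggle competitions replay 98765432   # download the replay JSON for an episode
kaggle competitions logs 98765432 0   # download logs for agent 0 in that episode
```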
Summary
- `kaggle competitions episodes <submission_id>` to list episodes for a submission in a simulation competition
- `kaggle competitions episode-replay <episode_id>` to download the replay JSON for an episode
- `kaggle competitions episode-logs <episode_id> <agent_index>` to download agent logs for a specific agent in an episode
- `kaggle competitions pages [competition]` to list competition pages (description, rules, evaluation, etc.), with an optional `--content` flag to show full page content

These commands use the `ListSubmissionEpisodes`, `GetEpisodeReplay`, `GetEpisodeAgentLogs`, and `ListCompetitionPages` APIs added to `CompetitionApiService`.
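Strung together, the commands summarized above form one inspection workflow. A hedged sketch, where the competition slug, submission ID, and episode ID are placeholders:

```shell
kaggle competitions pages lux-ai-season-3 --content   # read description, rules, evaluation
kaggle competitions episodes 12345678                 # list episodes for a submission
kaggle competitions episode-replay 98765432           # fetch the replay JSON for one episode
kaggle competitions episode-logs 98765432 1           # fetch logs for agent 1 in that episode
```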