This is more of a workaround than a complete solution. The tests pass even when Ruamel is installed and you can generate new transformation test results. Error test results cannot be generated with Ruamel due to Ruamel-specific scalar types
@a-v-popov -- this makes CI/CD tests work with ruamel, but it's an ugly workaround until we're forced to migrate to ruamel. It also makes the tests run a bit slower, as the topology must be cleaned of ruamel scalar types before proceeding.
A few items, mostly AI/test-driven, that I'd suggest addressing before merging, plus two small follow-ups that I think might be helpful:
1. The detection probe matches the namespace shim, not the library
tests/utils.py does import ruamel, but ruamel is a PEP 420 namespace package — it can survive uninstalls of ruamel.yaml (the ruamel/ directory often lingers, or another package owns part of the namespace). On my system right now:
```
>>> import ruamel
>>> import ruamel.yaml
ModuleNotFoundError: No module named 'ruamel.yaml'
```
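The same asymmetry can be checked programmatically. A minimal stdlib-only sketch (illustrative, not code from the PR; the helper name is made up):

```python
import importlib.util

def module_available(name: str) -> bool:
    """True only when `name` actually resolves to an importable module."""
    try:
        return importlib.util.find_spec(name) is not None
    except ModuleNotFoundError:
        # find_spec("pkg.sub") raises when the parent package itself is
        # missing; treat that as "not installed" as well
        return False

# Probe the submodule, not the namespace shim
HAS_RUAMEL = module_available("ruamel.yaml")
```

`find_spec` on the submodule gives the right answer even when a leftover `ruamel/` namespace directory makes the bare `import ruamel` succeed.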
`HAS_RUAMEL` becomes `True` while Box/PyYAML disagree, and `clean_ruamel` runs for nothing. python-box itself probes with `from ruamel.yaml import version_info, YAML` (box/converters.py:22). Suggested:
```python
try:
    import ruamel.yaml  # noqa: F401
    HAS_RUAMEL = True
except ImportError:
    HAS_RUAMEL = False
```

2. yaml.dump should pin default_flow_style=False
`Box.to_yaml(...)` in python-box 7.x defaults to `default_flow_style=False, width=120` and calls `yaml.dump(obj, default_flow_style=default_flow_style, width=width, ...)` (box/converters.py:195, box/box.py:1018). The PR replaces it with:

```python
return yaml.dump(topology.to_dict(), width=120)
```

PyYAML's own default for `default_flow_style` is `None` ("auto": flow style for inner scalar-only collections), not `False`. Today's fixtures are all under mappings, so the two modes coincide and the suite passes. The next fixture with a top-level scalar list, or a deeply nested scalar-only structure, will silently diverge from the format every committed expected/*.yml was generated in. One-line fix:

```python
return yaml.dump(topology.to_dict(), default_flow_style=False, width=120)
```

3. log._ERROR_LOG = [] is redundant
The previous line was `log.err_count = 0`, which was a typo from the start: `log.err_count` doesn't exist; the public counter is `get_error_count()` at netsim/utils/log.py:389. The new `log._ERROR_LOG = []` does target a real attribute, but `run_test` immediately calls `log.init_log_system(header=False)`, which itself does `_ERROR_LOG = []` (log.py:539). The reset is therefore redundant, and reaching into a leading-underscore symbol from outside the module is a smell. Either drop the line, or call `log.init_log_system(header=False)` if a defensive reset is desired.
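Circling back to item 2: the `None`-vs-`False` divergence is easy to reproduce with plain PyYAML on a nested scalar-only structure. A standalone repro (the topology data here is made up, not from the fixtures):

```python
import yaml  # PyYAML

data = {'topology': {'vlans': [101, 102, 103]}}

# "auto" mode: inner scalar-only collections come out in flow style,
# e.g. "vlans: [101, 102, 103]"
auto_style = yaml.dump(data, default_flow_style=None, width=120)

# explicit block style: every collection gets one item per line
block_style = yaml.dump(data, default_flow_style=False, width=120)

print(auto_style)
print(block_style)
```

Any fixture regenerated in one mode and compared in the other fails on formatting alone, which is why pinning `default_flow_style=False` matters.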
Two small follow-ups worth folding in
- `tests/conftest.py` (new): the standard, pytest-safe way to surface "ruamel.yaml is installed, things will be slower" is `pytest_configure` + `config.issue_config_time_warning(UserWarning(...))`. It appears in pytest's warnings summary, never fails a run, and doesn't touch collection, `-k` filtering, or parallelism. Import `HAS_RUAMEL` from `tests/utils.py` so there's a single source of truth.
- Gate the fixture generators. The PR docs already say error-log fixtures can't be generated under ruamel, but `tests/create-error-tests.sh` doesn't enforce it: a user can still run it and commit broken `.log` files. A 4-line `python3 -c "import ruamel.yaml" && exit 2` guard at the top closes that hole. `tests/create-transformation-test-case.py` does work under ruamel (the `clean_ruamel` walk handles it), so a stderr advisory there is enough.
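A sketch of what that `tests/conftest.py` could look like (hypothetical wiring; it assumes `HAS_RUAMEL` lives in `tests/utils.py` as suggested above, and `issue_config_time_warning` requires pytest >= 6.1):

```python
# tests/conftest.py (sketch) -- warn once, at config time, when ruamel.yaml
# is present; the warning lands in pytest's warnings summary, never fails a run
def pytest_configure(config):
    try:
        from tests.utils import HAS_RUAMEL  # single source of truth
    except ImportError:
        HAS_RUAMEL = False  # running the sketch outside the repo tree
    if HAS_RUAMEL:
        config.issue_config_time_warning(
            UserWarning("ruamel.yaml is installed; expect slower test runs"),
            stacklevel=2,
        )
```

`issue_config_time_warning` is the supported channel here because warning filters aren't fully configured yet during `pytest_configure`; a plain `warnings.warn` at that point can be swallowed or escalated depending on filter setup.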
Reference implementation of all of the above on a-v-popov/netlab@pr-3353-followup (compare view) — verified end-to-end:
- Without `ruamel.yaml`: xform + error tests pass; no warning.
- With `ruamel.yaml` 0.19.1: xform + error tests pass (+~40% wall time, matching the "slower" claim); `UserWarning` appears in pytest's warnings summary; nothing fails or skips.
- `create-error-tests.sh` with `ruamel.yaml` installed: exits 2 with an actionable message before doing anything destructive.
- `ruff check .` clean.
Happy to open the follow-up as a separate PR against dev after this lands, or fold the changes in here if you prefer.
Awesome job. Thanks a million. What AI model/harness were you using?
It would be great if you could either add changes to this PR or submit a PR against this branch, so that we end up with a single commit in the dev branch.
That would be Claude Code running in tmux on a remote VM in the cloud, using the official Opus 4.7 model at xhigh effort, if I understood the question correctly.
Submitted #3362