ENH: Machine-readable validate output with store/reload #1822
yarikoptic wants to merge 23 commits into master from
Conversation
Codecov Report

❌ Patch coverage is

Additional details and impacted files

```
@@ Coverage Diff @@
##           master    #1822    +/- ##
==========================================
+ Coverage   75.13%   76.28%   +1.15%
==========================================
  Files          84       87       +3
  Lines       11931    12457     +526
==========================================
+ Hits         8964     9503     +539
+ Misses       2967     2954      -13
```

Flags with carried forward coverage won't be shown.
Force-pushed 4894aa1 to 2ddd5d4
```python
# First produce a JSONL to load
outfile = tmp_path / "input.jsonl"
r = CliRunner().invoke(
```
Check warning
Code scanning / CodeQL: Variable defined multiple times (test)
Copilot Autofix
In general, to fix "variable defined multiple times" issues in this pattern, you either remove the redundant earlier assignment or, if its result should be checked, add the appropriate usage (e.g., assertions) between the assignments. The goal is to ensure every assignment either contributes to program behavior or is removed.
Here, the best fix without changing functionality is to stop assigning the result of the first CliRunner().invoke(...) to r, since r is not used before being reassigned. We still need to perform that first invocation to generate outfile, so we should keep the call but drop the r = part. Concretely, in dandi/cli/tests/test_cmd_validate.py within test_validate_auto_companion_skipped_with_load, change the line:
```python
r = CliRunner().invoke(
    main, ["validate", "-f", "json_lines", "-o", str(outfile), str(simple2_nwb)]
)
```

to:

```python
CliRunner().invoke(
    main, ["validate", "-f", "json_lines", "-o", str(outfile), str(simple2_nwb)]
)
```

No imports, helper methods, or other definitions are needed; this is a localized change to that one assignment.
```diff
@@ -880,7 +880,7 @@
     """--load suppresses auto-save companion."""
     # First produce a JSONL to load
     outfile = tmp_path / "input.jsonl"
-    r = CliRunner().invoke(
+    CliRunner().invoke(
         main, ["validate", "-f", "json_lines", "-o", str(outfile), str(simple2_nwb)]
     )
     assert outfile.exists()
```
```diff
     """
     # Avoid heavy imports by importing with function:
-    from ..upload import upload
+    from ..upload import upload as upload_
```
This is better of course than clobbering the namespace, but in the long term a better name that distinguishes API vs CLI functions (with CLI usually marked private since it is odd to expose them to library) is still preferred
Which would be a breaking change, not saying do it here, just thinking out loud
yeah -- typically we just clobbered click's interfaces (sorry didn't interleave... but at least manually pruned some unrelated)
```
❯ grep -l 'from \.\.\(.*\) import \1' dandi/cli/cmd_* | xargs grep '^def '
dandi/cli/cmd_delete.py:def delete(paths, skip_missing, dandi_instance, force, devel_debug=False):
dandi/cli/cmd_download.py:def download(
dandi/cli/cmd_move.py:def move(
dandi/cli/cmd_organize.py:def organize(
dandi/cli/cmd_upload.py:def upload(
dandi/cli/cmd_validate.py:def validate_bids(
dandi/cli/cmd_validate.py:def validate(
❯ grep 'from \.\.\(.*\) import \1' dandi/cli/cmd_*
dandi/cli/cmd_delete.py: from ..delete import delete
dandi/cli/cmd_download.py: from .. import download
dandi/cli/cmd_move.py: from .. import move as move_mod
dandi/cli/cmd_organize.py: from ..organize import organize
dandi/cli/cmd_upload.py: from ..upload import upload
dandi/cli/cmd_validate.py:from ..validate import validate as validate_
```

but frankly and unfortunately here it doesn't matter much since those click functions are not usable as python interfaces! I wish it was otherwise. So, no point of giving them any special names really.
> but frankly and unfortunately here it doesn't matter much since those click functions are not usable as python interfaces!
How do you mean? Any other library can import them and manipulate the click groups; sometimes that might even be intentional, but I don't think so here
```python
from dandi.cli.command import main
from dandi.cli.cmd_upload import upload  # Implying it is not private and intended to be imported and customized
import click

@main.command("upload2")  # Or even re-register under the same name if attempting some nasty injection
@click.pass_context
def wrapped_original(ctx):
    click.echo("Before original")  # Inject custom code here
    ctx.invoke(upload)
    click.echo("After original")  # Inject more custom code here
```
dandi/cli/cmd_upload.py (Outdated)

```python
jobs, jobs_per_file = jobs_pair

upload(
sidecar = None
```
IDK about referring to this as a 'sidecar' since that could get confusing with BIDS language
What is meant is, specifically, might be 'persistent file recording the validation results of this Dandiset'?
Even referring to it as a 'log' could get confusing with our own log file terminology (as in, the ones that contain runtime errors rather than validations)
indeed there is a clash with "sidecar" in BIDS but overall it is the same meaning, just rarely used outside of BIDS. ... but here we could just call it validation_log_path for the variable.
as for the helper, ATM no critically better name than the current validation_sidecar_path comes to mind.
note that the validation log is not necessarily of a dandiset -- could be of nwb files AFAIK, or potentially multiple dandisets... in this case what matters is that it is associated with the same run for which there is a log.
Contemplated on this and decided to go with "companion" term to describe such files which IMHO is synonymous to sidecar in its meaning here. So, potentially, we could have multiple companion files to accompany .log file
"Companion" works
I understand your meaning of sidecar here more since completing the review
But I would still like to avoid too many overlapping terms for things in our ecosystem, so "companion" it is
dandi/organize.py
Outdated
```diff
     yaml_load,
 )
-from .validate_types import (
+from .validate.types import (
```
Though I do believe these import changes could have been their own PR (which would have been much easier and faster to review and merge)
IIRC I did it in a single commit, so moves are tracked etc. could prep as a separate PR if needed, or just mark files "viewed" with such to hide away for now
or just mark files "viewed" with such to hide away for now
That is exactly what I do
Just letting you know for 'next time' (if there is)
```python
class TruncationNotice:
    """Placeholder indicating omitted results in truncated output."""

    omitted_count: int
```
Please add a description of this dataclass field to the docstring of the class
I think so far we never did that in this code base... we had discussion with John awhile back about that and agreed to follow some convention like
```python
@dataclass
class Movement:
    """A movement/renaming of an asset"""

    #: The asset's original path
    src: AssetPath
    #: The asset's destination path
    dest: AssetPath
    #: Whether to skip this operation because an asset already exists at the
    #: destination
    skip: bool = False
```

(from dandi/move.py) so we do get properly annotated within our sphinx docs on RTD: https://dandi.readthedocs.io/en/latest/modref/generated/dandi.move.html#dandi.move.Movement
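Applied to the field under review, that same `#:` convention might look like this (a sketch; the actual TruncationNotice definition in this PR may differ):

```python
from dataclasses import dataclass


@dataclass
class TruncationNotice:
    """Placeholder indicating omitted results in truncated output."""

    #: Number of results omitted from the truncated group
    omitted_count: int
```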
There are established ways of doing it in pydocstyle (Google or NumPy); just asking for whatever gets the information to the developer at the end of the day
dandi/cli/cmd_validate.py
Outdated
```python
type=click.Choice(["human", "json", "json_pp", "json_lines", "yaml"]),
default="human",
```
"human" is a bit of an odd value to specify here, especially against the others
Other validators refer to this as a 'summary' or something like that, right?
we could call it text or just default... I will go for text
I saw render functions below, so came to mind rendered but that would be "suggesting more than it is"... I feel text is ok
text is good and accurate 👍
```python
if not paths:
    paths = (os.curdir,)
# below we are using load_namespaces but it causes HDMF to whine if there
# is no cached name spaces in the file. It is benign but not really useful
# at this point, so we ignore it although ideally there should be a formal
# way to get relevant warnings (not errors) from PyNWB
ignore_benign_pynwb_warnings()
```
Again, pointing out that if this PR had been broken into smaller modular PRs, the negative breaks of this changelog may have aligned better with the relevant changes to catch any relevant drops or alterations from current behavior
dandi/cli/cmd_validate.py
Outdated
```python
filtered = _filter_results(results, min_severity, ignore)

if output_format == "human":
    _render_human(filtered, grouping, max_per_group=max_per_group)
```
The odd choice of 'human' for the value, as mentioned above, is especially observed here. This function is not creating Soylent Green.
Please choose a better name, such as '_format_summary' or '_generate_summary'
note: there is a dedicate summary option to add summary statement at the bottom!
Hmmm so the 'text' mode is not truly a summary/aggregate, just 'more human-readable' styling than JSON?
Still reports all invalidations? (though subject to filtering/grouping rules)?
```python
def _get_formatter(
    output_format: str, out: IO[str] | None = None
) -> JSONFormatter | JSONLinesFormatter | YAMLFormatter:
```
Seems odd for there not to be a 'Human (pending rename) Formatter' here to make clearer what the subformat even means
Completed first round of review

Some additional requests:

Your examples in the PR description are very useful to see, but they might not persist as 'truth' forever. All the current tests are effectively low-level unit tests asserting direct small aspects against sidecar contents. So I would LOVE to see at least one integration test that actually compares the full resulting output file content against an expected case (ideally one per supported format). This also makes it much easier to quickly show other people what the output is expected to look like (and easy to copy/paste into a presentation!)

Also: the code for the new

Lastly - maybe the last time I will harp on it - massive slop PRs like this make it harder for YOU (and your agent) to quickly and efficiently address all requested changes at once. Parts of this PR could have been merged directly, others could be patched up with AI in a matter of minutes, but the entire submission done together (not to mention further rounds of review) makes the process more arduous.
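One practical wrinkle with comparing full output files is that some fields (paths, in particular) vary between machines. A normalization helper along these lines could keep such golden-file comparisons stable (a sketch; the helper name and the choice of volatile fields are hypothetical, not part of this PR):

```python
import json


def normalized_jsonl(text, volatile=("path",)):
    """Parse JSONL text and drop fields that vary between runs/machines,
    so a full output file can be compared against a checked-in expected file."""
    records = []
    for line in text.splitlines():
        if line.strip():
            rec = json.loads(line)
            records.append({k: v for k, v in rec.items() if k not in volatile})
    return records


# Two runs from different checkouts compare equal once paths are dropped
a = '{"id": "DANDI.NO_DANDISET_FOUND", "severity": "ERROR", "path": "/tmp/a"}'
b = '{"id": "DANDI.NO_DANDISET_FOUND", "severity": "ERROR", "path": "/home/b"}'
assert normalized_jsonl(a) == normalized_jsonl(b)
```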
- Rename "human" output format to "text" throughout cmd_validate and tests (Click option, default values, function names _render_human → _render_text, _render_human_grouped → _render_text_grouped, test names, docstrings)
- Add field docstring to TruncationNotice.omitted_count
- Fix CodeQL warning: remove unused `r =` assignment in test_validate_load
- Use match statements in _get_formatter and _group_key
- Simplify cmd_upload sidecar path derivation to conditional expression
- Merge implicit string concatenation in validate/io.py warning

Co-Authored-By: Claude Code 2.1.81 / Claude Opus 4.6 <noreply@anthropic.com>
```python
def _get_formatter(
    output_format: str, out: IO[str] | None = None
) -> JSONFormatter | JSONLinesFormatter | YAMLFormatter:
```
Check notice
Code scanning / CodeQL: Explicit returns mixed with implicit (fall through) returns
Copilot Autofix
In general, to address "explicit returns mixed with implicit (fall through) returns", ensure that every function with explicit return statements also has an explicit return at the end, even if it's just return None (or another appropriate sentinel) and even if that line is theoretically unreachable.
For _get_formatter, the best fix that does not change functionality is to add an explicit return at the end of the function with a value that is consistent with the annotated return type. Since all valid paths already return or raise, this final return will never be executed in practice; its purpose is just to satisfy static analysis. Given the return type JSONFormatter | JSONLinesFormatter | YAMLFormatter, the most appropriate explicit return is to raise an error or return a value of that union. We already raise a ValueError in the default case. To avoid changing behavior, we should not alter that; instead, we can add a final return JSONFormatter(out=out) as a safe, unreachable default (or, if desired, choose one of the existing default behaviors). This keeps behavior the same for all reachable paths and silences the warning. The edit is confined to dandi/cli/cmd_validate.py, in the _get_formatter function, by appending one line after the match block.
No new methods or imports are needed; we already import JSONFormatter at the top of the file.
```diff
@@ -368,7 +368,12 @@
         case _:
             raise ValueError(f"Unknown format: {output_format}")

+    # Fallback return to satisfy static analysis; all valid paths above either
+    # return a formatter or raise ValueError, so this line is not expected
+    # to be reached at runtime.
+    return JSONFormatter(out=out)


 def _render_structured(
     results: list[ValidationResult],
     output_format: str,
```
```python
    raise SystemExit(1)


def _group_key(issue: ValidationResult, grouping: str) -> str:
```
Check notice
Code scanning / CodeQL: Explicit returns mixed with implicit (fall through) returns
Copilot Autofix: Copilot could not generate an autofix suggestion for this alert.
dandi/cli/cmd_validate.py
Outdated
```python
    and filtered
    and (obj := getattr(ctx, "obj", None)) is not None
):
    _auto_save_companion(filtered, obj.logfile)
```
The companion is saved from filtered (post-filter results) rather than the raw results list. This undermines the purpose of --load: if a user runs dandi validate --min-severity ERROR, only ERROR+ records end up in the companion. When they later do dandi validate --load companion.jsonl --min-severity HINT, the HINT issues are permanently gone.
The unfiltered results list — collected before _filter_results() is applied — should be what gets written here. Filtering should be a rendering concern, not a persistence concern.
great catch! moreover, makes no sense to save what is loaded! let's move it up right after _collect_results and simplify conditioning to match description in --help
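The suggested ordering can be sketched as follows (stub bodies and plain dicts; dandi's real helpers take more parameters): persist everything right after collection, and apply filtering only to what gets rendered.

```python
SEVERITY_ORDER = {"HINT": 0, "WARNING": 1, "ERROR": 2}


def filter_results(results, min_severity):
    # Rendering concern: drop results below the requested severity
    floor = SEVERITY_ORDER[min_severity]
    return [r for r in results if SEVERITY_ORDER[r["severity"]] >= floor]


def run_validate(results, min_severity, companion):
    # Persistence first: the companion receives *all* collected results...
    companion.extend(results)
    # ...so a later --load can still re-render HINTs that were hidden here.
    return filter_results(results, min_severity)


companion = []
collected = [{"severity": "HINT"}, {"severity": "ERROR"}]
shown = run_validate(collected, "ERROR", companion)
assert len(shown) == 1 and len(companion) == 2
```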
…e/ subpackage

Pure file move with no content changes, plus __init__.py re-exports for backward compatibility. Imports will be updated in the next commit.

Co-Authored-By: Claude Code 2.1.63 / Claude Opus 4.6 <noreply@anthropic.com>

Update imports across 13 files to use the new subpackage structure:
- dandi.validate_types → dandi.validate.types
- dandi.validate → dandi.validate.core (for explicit imports)
- Relative imports adjusted accordingly

Co-Authored-By: Claude Code 2.1.63 / Claude Opus 4.6 <noreply@anthropic.com>

- test_validate.py → dandi/validate/tests/test_core.py
- test_validate_types.py → dandi/validate/tests/test_types.py
- Update relative imports in moved test files
- Fix circular import: don't eagerly import core in __init__.py

Co-Authored-By: Claude Code 2.1.63 / Claude Opus 4.6 <noreply@anthropic.com>

…ate CLI

Decompose the monolithic validate() click command into helpers:
- _collect_results(): runs validation and collects results
- _filter_results(): applies min-severity and ignore filters
- _process_issues(): simplified, no longer handles ignore (moved to _filter)

No behavior changes; all existing tests pass unchanged.

Co-Authored-By: Claude Code 2.1.63 / Claude Opus 4.6 <noreply@anthropic.com>

Design plan for enhancing `dandi validate` with:
- Structured output formats (-f json/json_pp/json_lines/yaml)
- Auto-save _validation.jsonl sidecar alongside .log files
- --load to reload/re-render stored results with different groupings
- Upload validation persistence for later inspection
- Extended grouping options (severity, id, validator, standard, dandiset)
- Refactoring into dandi/validate/ subpackage (git mv separately)
- _record_version field on ValidationResult for forward compatibility
- VisiData integration via native JSONL support

Addresses #1515, #1753, #1748; enhances #1743.

Co-Authored-By: Claude Code 2.1.63 / Claude Opus 4.6 <noreply@anthropic.com>

Add record_version: str = "1" for forward-compatible serialization. Uses no underscore prefix since Pydantic v2 excludes underscore-prefixed fields from serialization.

Co-Authored-By: Claude Code 2.1.63 / Claude Opus 4.6 <noreply@anthropic.com>
Add -f/--format {human,json,json_pp,json_lines,yaml} to produce
structured output using existing formatter infrastructure. Structured
formats suppress colored text and 'No errors found' message. Exit
code still reflects validation results.
Co-Authored-By: Claude Code 2.1.63 / Claude Opus 4.6 <noreply@anthropic.com>
- Create dandi/validate/io.py with write/append/load JSONL utilities and validation_sidecar_path() helper
- Add -o/--output option to write structured output to file
- Auto-save _validation.jsonl sidecar next to logfile when using structured format without --output

Co-Authored-By: Claude Code 2.1.63 / Claude Opus 4.6 <noreply@anthropic.com>
Add --summary/--no-summary flag that shows statistics after validation: total issues, breakdown by severity, validator, and standard. For human output, printed to stdout; for structured formats, printed to stderr. Also refactors _process_issues into _render_human (no exit) + _exit_if_errors, keeping _process_issues as backward-compatible wrapper. Co-Authored-By: Claude Code 2.1.63 / Claude Opus 4.6 <noreply@anthropic.com>
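The breakdown that the `--summary` flag describes could be computed along these lines (a sketch; records are plain dicts here, and only the severity tally is shown, not validator/standard):

```python
from collections import Counter


def summarize(results):
    """Total issue count plus a per-severity breakdown."""
    by_severity = Counter(r["severity"] for r in results)
    return {"total": len(results), "by_severity": dict(by_severity)}


stats = summarize([{"severity": "ERROR"}, {"severity": "HINT"}, {"severity": "HINT"}])
assert stats == {"total": 3, "by_severity": {"ERROR": 1, "HINT": 2}}
```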
Add --load to reload previously-saved JSONL validation results and re-render them with different formats/filters/grouping. Mutually exclusive with positional paths. Exit code reflects loaded results. Skip auto-save sidecar when loading. Co-Authored-By: Claude Code 2.1.63 / Claude Opus 4.6 <noreply@anthropic.com>
- Add validation_log_path parameter to upload()
- In upload validation loop, append results to sidecar via append_validation_jsonl() when validation_log_path is set
- CLI cmd_upload derives sidecar path from logfile and passes it

Co-Authored-By: Claude Code 2.1.63 / Claude Opus 4.6 <noreply@anthropic.com>
Fix mypy errors by using IO[str] instead of object for file-like output parameters in _print_summary, _get_formatter, and _render_structured. Co-Authored-By: Claude Code 2.1.63 / Claude Opus 4.6 <noreply@anthropic.com>
When --output is given without explicit --format, infer the format from the file extension: .json → json_pp, .jsonl → json_lines, .yaml/.yml → yaml. Error only if extension is unrecognized. Update design doc to reflect this behavior. Co-Authored-By: Claude Code 2.1.63 / Claude Opus 4.6 <noreply@anthropic.com>
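The inference this commit describes amounts to a suffix lookup (the mapping is taken from the commit message; the function name is hypothetical):

```python
from pathlib import Path

# .json → json_pp, .jsonl → json_lines, .yaml/.yml → yaml, per the commit message
EXT_TO_FORMAT = {
    ".json": "json_pp",
    ".jsonl": "json_lines",
    ".yaml": "yaml",
    ".yml": "yaml",
}


def infer_format(output_path):
    suffix = Path(output_path).suffix.lower()
    try:
        return EXT_TO_FORMAT[suffix]
    except KeyError:
        # Error only when the extension is unrecognized
        raise ValueError(f"Cannot infer output format from {output_path!r}") from None


assert infer_format("results.jsonl") == "json_lines"
```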
Add severity, id, validator, standard, and dandiset as --grouping options. Uses section headers with counts (e.g. "=== ERROR (5 issues) ===") for human output. Structured output is unaffected (always flat). Co-Authored-By: Claude Code 2.1.63 / Claude Opus 4.6 <noreply@anthropic.com>
Limit how many results are shown per leaf group (or in the flat list
when no grouping is used). Excess results are replaced by a
TruncationNotice placeholder — a distinct dataclass (not a
ValidationResult) so consumers can isinstance() check.
- TruncationNotice dataclass + LeafItem/TruncatedResults type aliases
- _truncate_leaves() walks the grouped tree, caps leaf lists
- Human output: "... and N more issues" in cyan
- Structured output: {"_truncated": true, "omitted_count": N} sentinel
- Headers show original counts including omitted items
- Works without grouping (flat list) and with multi-level grouping
Co-Authored-By: Claude Code 2.1.63 / Claude Opus 4.6 <noreply@anthropic.com>
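The leaf-walk described in this commit might look roughly like this (a sketch: the real code emits a TruncationNotice dataclass, modeled here as a `{"_truncated": ...}` dict sentinel matching the structured-output shape):

```python
def truncate_leaves(node, max_per_group):
    # Recurse through nested grouping dicts; only leaf lists get capped.
    if isinstance(node, dict):
        return {k: truncate_leaves(v, max_per_group) for k, v in node.items()}
    if len(node) <= max_per_group:
        return node
    omitted = len(node) - max_per_group
    return node[:max_per_group] + [{"_truncated": True, "omitted_count": omitted}]


grouped = {"ERROR": ["a", "b", "c", "d"], "HINT": ["e"]}
out = truncate_leaves(grouped, 2)
assert out["HINT"] == ["e"]  # under the limit: untouched
assert out["ERROR"][-1] == {"_truncated": True, "omitted_count": 2}
```

It also works on a flat (ungrouped) list, since the dict branch is simply skipped.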
The _auto_save_sidecar() call was only in the structured-to-stdout branch, so the default human format (the most common usage) never wrote the _validation.jsonl sidecar next to the log file.

Move the sidecar write and _exit_if_errors() into a shared path that runs after all rendering branches. The sidecar is now written whenever there are results, unless --output or --load is active.

Also update the validate docstring/help text to document the sidecar behavior, and update the design spec (Phase 1b, Phase 3, testing strategy) to reflect the --validation-log CLI option for upload and proper CLI integration testing via CliRunner through main().

Co-Authored-By: Claude Code 2.1.81 / Claude Opus 4.6 <noreply@anthropic.com>
…ave companion unfiltered in cmd_validate

In BIDS, "sidecar" specifically refers to .json files accompanying data files. Rename all internal references to the _validation.jsonl file from "sidecar" to "companion" to avoid confusion:

- validation_sidecar_path() → validation_companion_path()
- _auto_save_sidecar() → _auto_save_companion()
- Variable names, docstrings, comments, and spec prose

The only remaining "sidecar" reference is in validate/types.py where it correctly describes BIDS sidecar JSON files.

Co-Authored-By: Claude Code 2.1.81 / Claude Opus 4.6 <noreply@anthropic.com>
Extend grouping test coverage from only severity to all grouping values and composite (multi-level) grouping specs:

- Parametrize text and JSON CLI tests with 8 specs each: 5 single values (severity, id, validator, standard, dandiset) + 3 composite (severity+id, validator+severity, id+validator)
- Parametrize --load and --output tests with single and composite specs
- Add _grouping_opts() helper to compose -g args, reused across tests
- Assert known issue ID (DANDI.NO_DANDISET_FOUND) in output
- Assert nested indentation for composite groupings in text format
- Assert nested dict structure for composite groupings in JSON format

Co-Authored-By: Claude Code 2.1.81 / Claude Opus 4.6 <noreply@anthropic.com>
Eliminate duplication between write_validation_jsonl and
append_validation_jsonl by adding a keyword-only `append` parameter
to write_validation_jsonl. The two functions had identical bodies
differing only in the file open mode ("w" vs "a").
Co-Authored-By: Claude Code 2.1.81 / Claude Opus 4.6 <noreply@anthropic.com>
Force-pushed 352b7f5 to 9881001
Refactoring of codebase into dandi/validate/ subpackage for larger #1822
Pure git mv to preserve rename tracking. Import updates follow in the next commit. Co-Authored-By: Claude Code 2.1.81 / Claude Opus 4.6 <noreply@anthropic.com>
- Update all import paths to use _io, _core, _types across 16 files
- Add io functions to __init__.py re-exports and __all__
- Change load_validation_jsonl from variadic *paths to Iterable[paths]
- Move record_version check from io loader into ValidationResult.model_post_init (fires regardless of load method)
Summary

Design plan for machine-readable `validate` output with store/reload capability. Adds structured output formats, automatic persistence of validation results alongside log files, and the ability to reload and re-render results with different grouping/filtering options.

Key design decisions:
- All structured formats (json, json_pp, json_lines, yaml) emit a uniform flat list of `ValidationResult` records — no envelope/non-envelope split
- JSONL as the primary interchange format: `cat`/`jq`/`grep`/`vd` (VisiData) composable
- `_record_version` field on each record for forward-compatible deserialization
- Grouping affects human display only; structured output is always a stable flat schema
- Auto-save `_validation.jsonl` companion next to existing `.log` files
- Refactor `dandi/validate.py` + `dandi/validate_types.py` into `dandi/validate/` subpackage

Closes "validate: Add -f|--format option to optionally serialize into json, json_pp, json_lines or yaml" #1515
Closes "Provide easy means for introspecting upload validation failures" #1753
Largely replaces "Add filtering of issues by type/ID or by file location" #1743
Enhances #1743, "upload,validate: Add --validators option" #1737, "Tidy up the `validate` command function in `cmd_validate.py`" #1748

TODO
- `dandi/validate/` subpackage (`git mv` committed separately from import updates)
- `cmd_validate.py` — extract `_collect_results()`, `_filter_results()`, `_render_results()`
- `_record_version` to `ValidationResult`
- `--format` (`-f`) option: human|json|json_pp|json_lines|yaml
- `--output` (`-o`) + auto-save `_validation.jsonl` companion file
- `--summary` flag
- `--load` (multiple paths, mutually exclusive with positional args)
- `dandi upload`
- severity, id, validator, standard, dandiset
- `--max-per-group` truncation — cap results per leaf group with placeholder notice

`--max-per-group` feature (Step 5)

Limits how many results are shown per leaf group (or in the flat list when no grouping). Excess results are replaced by a `TruncationNotice` placeholder — a distinct data structure (not a `ValidationResult`), so it won't be confused with real results if the output is saved/reloaded.

Examples (against 147k+ validation results from bids-examples)

Flat truncation — `--max-per-group 5` with no grouping:

Grouped truncation — `-g severity --max-per-group 3`:

and actually those are colored if output is not redirected

Multi-level leaf-only truncation — `-g severity -g id --max-per-group 2`:

Structured output — `-g severity -f json_pp --max-per-group 2` emits `_truncated` placeholders:

```json
{
  "ERROR": [
    { "id": "DANDI.NO_DANDISET_FOUND", "severity": "ERROR", ... },
    { "id": "BIDS.NIFTI_HEADER_UNREADABLE", "severity": "ERROR", ... },
    { "_truncated": true, "omitted_count": 9567 }
  ],
  "HINT": [
    { "id": "BIDS.JSON_KEY_RECOMMENDED", "severity": "HINT", ... },
    { "id": "BIDS.JSON_KEY_RECOMMENDED", "severity": "HINT", ... },
    { "_truncated": true, "omitted_count": 138015 }
  ]
}
```

Headers show original counts (e.g. "9569 issues") even when only a few are displayed. The `_truncated` sentinel follows the `_record_version` naming convention for metadata fields.

Test plan
- `--format` output via `click.CliRunner`
- `ValidationResult` JSONL
- `--load` with multi-file concatenation, mutual exclusivity enforcement
- when `--output` is used
- `--max-per-group`: flat truncation, grouped truncation, multi-level, JSON placeholder, no-truncation when under limit
- `_truncate_leaves()` helper

Some demos

See also

- `dandi validate` across bids-examples and then using visidata for navigation of composite dump of records

TODOs

- `validate` without `-o` does not store the `_validation.jsonl`

Generated with Claude Code