[FEATURE](pyspark) PySpark validation generated from the Pydantic schema #518
Draft
Seth Fitzsimmons (sethfitz) wants to merge 7 commits into
Conversation
pytest-testmon tracks which tests cover which source files and skips unaffected tests on subsequent runs. Activated via a TESTMON Makefile variable so the default `make check` uses incremental selection while `make check TESTMON=` runs the full suite. Lock the dependency in the dev group, gitignore the local cache file, and thread $(TESTMON) through the test, test-all, and test-only targets. Signed-off-by: Seth Fitzsimmons <seth@mojodna.net>
Pull the shared `dimension` and `comparison` fields of the five vehicle selector subtypes into a `VehicleSelectorBase` parent, and thread `discriminator="dimension"` through the `VehicleSelector` annotated union. The discriminator turns the union into a Pydantic discriminated union, so it serializes as JSON Schema's `oneOf` + `discriminator` rather than `anyOf`. Regenerated segment_baseline_schema.json captures the new shape. This is a prerequisite for downstream tooling that walks discriminated unions structurally (e.g. PySpark codegen for segment's nested vehicle scoping). Signed-off-by: Seth Fitzsimmons <seth@mojodna.net>
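For orientation, a minimal sketch of the resulting shape, assuming two illustrative arms in place of the five real subtypes (the `value` payload fields are invented):

```python
from typing import Annotated, Literal, Union

from pydantic import BaseModel, Field

class VehicleSelectorBase(BaseModel):
    # Shared field hoisted into the base; each subtype narrows
    # `dimension` to its own Literal so it can serve as the discriminator.
    comparison: str

class WeightSelector(VehicleSelectorBase):
    dimension: Literal["weight"]
    value: float  # illustrative payload field

class AxleCountSelector(VehicleSelectorBase):
    dimension: Literal["axle_count"]
    value: int  # illustrative payload field

# discriminator="dimension" turns the union into a Pydantic discriminated
# union, which serializes to JSON Schema as oneOf + discriminator
# instead of anyOf.
VehicleSelector = Annotated[
    Union[WeightSelector, AxleCountSelector],
    Field(discriminator="dimension"),
]
```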
ConstraintSource now carries list_anchor_depth -- the number of list[...] layers between the field's outermost wrapper and the layer where the constraint was declared. _UnwrapState.add_constraint populates it from the unwrapper's current list_depth, so a constraint attached to the inner layer of list[Annotated[list[T], MinLen(1)]] is distinguishable from one declared at the outer wrapper instead of collapsing into an identical descriptor. Field-level metadata surfaced by Pydantic is anchored at depth 0; a comment in _merge_field_metadata records this invariant. The default of 0 keeps existing consumers unaffected. Downstream codegen can dispatch on the residual depth (ti.list_depth - cs.list_anchor_depth) to tell stacked list and string constraints apart. Signed-off-by: Seth Fitzsimmons <seth@mojodna.net>
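A minimal sketch of the distinction this commit makes representable (field names are illustrative):

```python
from typing import Annotated

from annotated_types import MinLen
from pydantic import BaseModel

class Example(BaseModel):
    # MinLen(1) anchored at list_anchor_depth=1: each *inner* list must
    # be non-empty; the outer list may be empty.
    inner_constrained: list[Annotated[list[str], MinLen(1)]]

    # MinLen(1) anchored at list_anchor_depth=0: the *outer* list must
    # be non-empty. Without the anchor depth, both constraints would
    # collapse into identical descriptors.
    outer_constrained: Annotated[list[list[str]], MinLen(1)]
```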
Replace the Tonga-based Division/DivisionArea/DivisionBoundary fixtures with Kauaʻi County samples that exercise admin_level, capital_division_ids, wikidata, and source license alongside the existing fields. Replace the Tonga-based Connector/Segment fixtures with a Vermooten Street junction in Pretoria that exercises access_restrictions with when.vehicle, speed_limits with when.heading, routes with ref, road_surface, and multi-source attribution. Reformat the TOML with 4-space indents and sorted keys to match sibling theme packages. Signed-off-by: Seth Fitzsimmons <seth@mojodna.net>
Introduce overture-schema-pyspark, a runtime PySpark validation
package whose per-feature expression modules and conformance tests
are generated from the same Pydantic models that define the schema,
along with an `overture-validate` CLI.
Runtime (overture-schema-pyspark/src/overture/schema/pyspark/):
- check.py — Check, CheckShape, FeatureValidation dataclasses.
- schema_check.py — write-first comparison of Spark schemas against
an expected StructType, with structural type matching and
SchemaMismatch reporting.
- validate.py — public API: validate_feature(), evaluate_checks(),
explain_errors(). The explain stage UNPIVOTs per-row check results
into one row per violation, preserving all input columns for
downstream join-back (see the sketch after this list).
- cli.py — `overture-validate <parquet-or-directory>` runs the
validation pipeline against a path of GeoParquet files. Output is
one row per violation: feature ID, theme/type, failing field,
check name, offending value. Single-pass evaluation keeps memory
bounded for arbitrarily large inputs.
- expressions/ — shared runtime utilities (constraint_expressions,
column_patterns, _schema_structs). Per-feature expression modules
live under expressions/overture/ and are added by the codegen in
a follow-up commit.
- tests/_support/ — conformance test infrastructure (scenarios,
harness, helpers, mutations). The harness builds one DataFrame
per feature, applies all scenarios as deterministic-UUID-tagged
rows, runs validation once, and indexes violations back to
scenario IDs — O(checks) rather than O(checks * scenarios).
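As referenced above, a minimal sketch of the UNPIVOT idea behind the explain stage (check names, columns, and data are illustrative; the real logic lives in validate.py):

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Per-row boolean check results, one column per check.
df = spark.createDataFrame(
    [("f1", True, False), ("f2", False, True)],
    ["id", "min_len_ok", "pattern_ok"],
)

# stack() unpivots the check columns into (check_name, passed) pairs,
# one output row per pair; keeping `id` (and any other input columns)
# in the select preserves them for downstream join-back. Filtering on
# failures leaves one row per violation.
violations = df.select(
    "id",
    F.expr(
        "stack(2, 'min_len_ok', min_len_ok, 'pattern_ok', pattern_ok)"
        " as (check_name, passed)"
    ),
).where(~F.col("passed"))
```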
CLI filtering options:
--theme <theme>        limit to one theme
--feature <feature>    limit to one feature type
--skip-schema-check    run only constraint checks (no schema comparison)
--count-only           print violation counts per check rather than rows
--suppress <key>       suppress specific (feature, field, check) triples per a YAML config
Codegen pipeline (overture-schema-codegen/src/.../pyspark/):
FeatureSpec
    |
constraint_dispatch.py   map constraints to descriptors
    |
check_builder.py         walk FieldSpec -> CheckNode IR;
                         resolve array nesting, variant gating
    |
schema_builder.py        FieldSpec -> SchemaField list (StructType source)
    |
renderer.py              CheckNode -> per-feature expression module
test_renderer.py         CheckNode -> per-feature conformance test module
synthetic.py             FeatureSpec -> BASE_ROW + invalid values
    |
pipeline.py              orchestrate, return GeneratedModule list
The dispatch tables map every supported constraint (Ge/Gt/Le/Lt/
Interval, MinLen/MaxLen, StrippedConstraint, PatternConstraint,
UniqueItemsConstraint, GeometryTypeConstraint, JsonPointerConstraint,
RequireAnyOfConstraint, RadioGroupConstraint, RequireIfConstraint,
ForbidIfConstraint, MinFieldsSetConstraint), NewType
(CountryCodeAlpha2, LinearlyReferencedRange, RegionCode), and base
type (HttpUrl, EmailStr) to constraint_expressions check functions.
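A minimal sketch of the dispatch idea (builder names and signatures are illustrative, not the actual descriptor API):

```python
from typing import Callable

from annotated_types import Ge, MinLen
from pyspark.sql import Column, functions as F

def ge_check(col: Column, c: Ge) -> Column:
    # Null passes here; requiredness is a separate check.
    return col.isNull() | (col >= F.lit(c.ge))

def min_len_check(col: Column, c: MinLen) -> Column:
    # String variant; the array variant would use F.size instead.
    return col.isNull() | (F.length(col) >= F.lit(c.min_length))

# Dispatch on the constraint's type to a check-expression builder.
DISPATCH: dict[type, Callable[..., Column]] = {
    Ge: ge_check,
    MinLen: min_len_check,
}
```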
Discriminated unions (segment is the canonical hard case) split
into per-arm test files. The codegen handles arm splitting via
generate_arm_rows in synthetic.py and _filter_field_nodes_for_arm
in test_renderer.py.
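A minimal sketch of the gating shape for one arm, assuming `subtype` is the segment discriminator (the generated expressions are more involved):

```python
from pyspark.sql import functions as F

# A road-arm constraint only fires on rows whose discriminator selects
# the road arm; rows from other arms pass vacuously.
road_gated = F.when(
    F.col("subtype") == F.lit("road"),
    F.col("road_surface").isNotNull(),  # stand-in for a real road-arm check
).otherwise(F.lit(True))
```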
Cross-package touch-ups:
- transportation models: minor tweak.
The Makefile gains a `generate-pyspark` target and gates `check`
on it so a stale generation surfaces immediately. The CLI is exposed
as a `[project.scripts]` entry point so `overture-validate`
becomes available after `pip install` / `uv sync`.
Signed-off-by: Seth Fitzsimmons <seth@mojodna.net>
Generate PySpark expressions (and tests) for models defined in the workspace
PySpark 3.4 (the declared floor) doesn't run on Java 21, the default JDK on ubuntu-latest runners -- it hits NoSuchMethodException on java.nio.DirectByteBuffer.<init>(long, int), removed in JDK 21. Pin the lowest-direct cell to Java 17 so the resolved pyspark==3.4.0 can actually start. The default cell (which resolves to a current pyspark 4.x) keeps the runner's default Java 21. Signed-off-by: Seth Fitzsimmons <seth@mojodna.net>
Closes #517.
Summary
Adds a new runtime package (`overture-schema-pyspark`) plus a new output target in `overture-schema-codegen` that emits PySpark validation expressions and conformance tests from the same Pydantic models that define the schema. Ships an `overture-validate` CLI for running the validation against Parquet on disk or in S3.

PySpark plugs in as a peer of the existing Markdown output target: same `FeatureSpec` extraction, same four-layer architecture (Discovery -> Extraction -> Output Layout -> Rendering), new pipeline module. See `packages/overture-schema-codegen/docs/design.md` for the full picture; the "PySpark Pipeline" section there covers the new stages in detail.

What's in the PR
- `packages/overture-schema-pyspark/` -- runtime. Public API in `validate.py` (`validate_feature`, `explain_errors`), schema comparison in `schema_check.py`, dataclasses in `check.py`, the `overture-validate` CLI in `cli.py`, and shared expression building blocks in `expressions/{constraint_expressions,column_patterns,_schema_structs}.py`. The per-feature expression modules under `expressions/generated/overture/schema/<theme>/<feature>.py` and per-feature conformance tests under `tests/generated/overture/schema/<theme>/test_<feature>.py` are emitted by codegen and confined to a `generated/` boundary that `make generate-pyspark` wipes and recreates. `_registry.py` walks that tree at import time and exposes `REGISTRY: dict[str, FeatureValidation]` keyed by feature type name.
- `packages/overture-schema-codegen/src/overture/schema/codegen/pyspark/` -- new output target. Pipeline stages: `constraint_dispatch.py` -> `check_builder.py` -> `schema_builder.py` -> `renderer.py` / `test_renderer.py` / `synthetic.py` -> `pipeline.py` (see the commit message above for the full diagram). `make generate-pyspark` wipes both `generated/` trees and recreates them; `make check` gates on regeneration being current.
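For orientation, a hypothetical registry lookup; the import path and key are assumptions inferred from the package layout described above:

```python
# Hypothetical usage sketch -- import path and key format are assumed,
# not taken from the package's documented API.
from overture.schema.pyspark._registry import REGISTRY

segment_validation = REGISTRY["segment"]  # -> FeatureValidation
```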
What's covered

Every constraint Pydantic enforces today is dispatched to a PySpark expression:
- `Ge`/`Gt`/`Le`/`Lt`/`Interval`, `MinLen`/`MaxLen` (both array and string variants), `StrippedConstraint`, `PatternConstraint`, `UniqueItemsConstraint`, `GeometryTypeConstraint`, `JsonPointerConstraint`.
- `CountryCodeAlpha2`, `RegionCode`, `LinearlyReferencedRange` (length / bounds / order).
- `HttpUrl` (format + length), `EmailStr`, `BBox` (completeness, lat ordering, lat range).
- `RequireAnyOfConstraint`, `RadioGroupConstraint`, `RequireIfConstraint`, `ForbidIfConstraint`, `MinFieldsSetConstraint`.
- `NoExtraFieldsConstraint` is intentionally skipped.

Nested arrays, structs inside arrays, variant-gated fields (discriminated unions), and nested unions (a union field within a union member) all translate into matching `array_check`/`nested_array_check` chains with discriminator gating. `segment` is the canonical hard case -- it produces three test files (`test_segment_road.py`, `test_segment_rail.py`, `test_segment_water.py`), one per arm.
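A minimal sketch of the element-wise idea behind such chains (column name and predicate are illustrative, not the package's actual `array_check` API):

```python
from pyspark.sql import functions as F

# Every element of an array column must satisfy a per-element predicate;
# F.forall maps the check across the array. Nested arrays compose by
# nesting another forall inside the lambda.
names_ok = F.forall(F.col("names"), lambda name: F.length(name) <= F.lit(64))
```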
Known semantic gaps

Two documented divergences from Pydantic, both with `xfail`'d conformance tests:

- `UniqueItemsConstraint` uses Spark's `array_distinct`, which compares whole elements with structural equality on raw stored values. Pydantic compares normalized Python objects -- e.g., `list[HttpUrl]` is compared after URL normalization. The PySpark check catches exact duplicates only.
- `require_any_of` checks `isNotNull` as a proxy for Pydantic's `model_fields_set`. Parquet has no equivalent of "explicitly provided"; `isNotNull` is stricter (it rejects fields explicitly set to null).
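To make the first divergence concrete, a minimal sketch of a structural-equality uniqueness check along those lines (column name is illustrative):

```python
from pyspark.sql import functions as F

# Unique iff deduplicating the raw array doesn't shrink it; null arrays
# pass vacuously. This compares raw stored values, not normalized
# objects, so differently-normalized duplicate URLs slip through.
sources = F.col("sources")
unique_ok = sources.isNull() | (F.size(F.array_distinct(sources)) == F.size(sources))
```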
CLI

Output is one row per violation: feature ID, theme/type, failing field, check name, message, offending value. Single-pass evaluation -- one DataFrame, one Spark job. Switches: `--count-only`, `--head N`, `--suppress FIELD[:CHECK]`, `--skip-schema-check`, `--ignore-columns`, `--skip-extra-columns`, `--conf KEY=VALUE`.

Testing
`make check` runs the full suite including generated conformance tests. The conformance tests are the gate: when codegen changes produce different expressions, regenerated tests fail until expectations are also regenerated, so the two surfaces cannot silently drift.

Beyond the unit and conformance tests:
Notes for review
- The code to review is `pyspark/{constraint_dispatch,check_builder,schema_builder,renderer,test_renderer,pipeline}.py` plus the runtime in `overture-schema-pyspark/src/overture/schema/pyspark/`. Everything under `generated/` is regenerable output -- review the codegen, not the output.
- The schema and CI changes (`VehicleSelectorBase` extraction, `list_anchor_depth` on `ConstraintSource`, Java 17 CI pin) were prerequisites for this work and are included here rather than split out.