
Add planogram_optimization template (CSP, predict-then-optimize, decision-indexed table lookup) #60

Open
chriscoey wants to merge 16 commits into main from
csp-planogram_optimization

Conversation

@chriscoey
Member

@chriscoey chriscoey commented May 8, 2026

What this template adds

v1/planogram_optimization is a CSP that decides integer facing counts per SKU on each shelf to maximize predicted weekly demand under shelf-length capacity and per-category active-SKU bounds. Predicted demand at each candidate facings count comes from a vendored predicted_demand_table.csv (a stand-in for any per-(SKU, k) regression model).

The composition pattern this surfaces is the canonical predict-then-optimize hand-off via an element-style decision-indexed table lookup: the CSP picks Sku.facings, and Sku.realized_demand is bound -- via an implies cascade over PredictedDemand rows -- to the demand_units value at the chosen facings_count. Pure CSP, no bilinearity, no big-M, no SOS2.
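As a plain-Python sketch of what the cascade computes (names like `table`, `facings`, and `realized_demand` are illustrative, not the relationalai API): each table row (sku, k) contributes one half-reified equality whose predicate `facings == k` gates the demand assignment, so exactly one row per SKU fires.

```python
# Plain-Python sketch of the per-(sku, k) implies cascade's semantics:
# "facings[sku] == k implies realized[sku] == table[sku, k]" for every
# table row; only the row matching the chosen facings count activates.
# Names are illustrative, not the relationalai API.

def realized_demand(table: dict, facings: dict) -> dict:
    realized = {}
    for (sku, k), demand in table.items():
        if facings[sku] == k:        # the implies predicate
            realized[sku] = demand   # ...pins the realized demand
    return realized

table = {("A", 0): 0, ("A", 1): 12, ("A", 2): 20,
         ("B", 0): 0, ("B", 1): 9}
print(realized_demand(table, {"A": 2, "B": 1}))  # {'A': 20, 'B': 9}
```

This also makes the silent-failure mode concrete: if a (sku, k) row is missing from `table`, the matching predicate never fires and that SKU's realized demand is simply never pinned.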

Modeling patterns this surfaces

  • Decision-indexed table lookup as implies cascade: the natural relational form `where(PredictedDemand.facings_count == Sku.facings).require(Sku.realized_demand == PredictedDemand.demand_units)` does not lower today (decision-vs-data equality inside `where`); pushing the decision-vs-data equality into the predicate of an `implies` lets the rewriter expand it into the canonical per-k form.
  • Active-iff-facings via linear inequalities: Sku.facings >= Sku.active plus Sku.facings <= Sku.max_facings * Sku.active couples a 0/1 indicator to the integer facings decision in pure relational arithmetic, so the per-category cardinality reads as sum(Sku.active).per(Category) and both ICs are re-evaluated by problem.verify().
  • Pre-solve guards for silent-failure modes: _assert_demand_table_complete (the implies cascade leaves realized_demand unconstrained for missing (sku, k) rows; also enforces demand_units == 0 at k == 0 so inactive SKUs cannot collect demand without consuming shelf capacity), _assert_categories_match_skus (both directions -- empty .per(Category) groups silently relax min_skus_active), _assert_shelves_cover_skus (dangling assigned_shelf_id silently exempts capacity), _assert_unique_keys (duplicate entity-keyed rows silently merge).
  • Post-solve table-lookup re-validation: a Python dict lookup confirms (sku_id, facings) -> demand_units in the returned solution matches the input table, defense-in-depth for the implies-bodied lookup that verify() cannot re-evaluate.
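A minimal sketch of the first guard's logic, assuming rows are (sku_id, facings_count, demand_units) tuples; the helper name follows the description above, not the template's exact code:

```python
# Hedged sketch of _assert_demand_table_complete: every (sku, k) for
# k in 0..max_facings must appear, and the k=0 row must carry zero
# demand. Row shape and signature are assumptions for illustration.

def assert_demand_table_complete(rows, max_facings):
    """rows: (sku_id, facings_count, demand_units) tuples;
    max_facings: dict mapping sku_id -> max facings count."""
    seen = {(s, k): d for s, k, d in rows}
    for sku, k_max in max_facings.items():
        for k in range(k_max + 1):
            # a missing (sku, k) row leaves realized_demand unconstrained
            assert (sku, k) in seen, f"missing row ({sku}, {k})"
        # k=0 must carry zero demand, or an inactive SKU could collect
        # demand while consuming no shelf capacity
        assert seen[(sku, 0)] == 0, f"demand_units != 0 at k=0 for {sku}"

assert_demand_table_complete(
    [("A", 0, 0), ("A", 1, 12), ("A", 2, 20)], {"A": 2})
```

Note the dict comprehension silently collapses duplicate (sku, k) rows, which is exactly why a separate compound-key uniqueness guard is needed.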

Bundled change to demand_forecasting

This PR also flips `demand_forecasting/README.md` to `private: true`, matching the gating already in place for the other Predictive (GNN) templates (subscriber_retention, plus the planogram template here, which uses a Predictive-arm placeholder). Single-line front-matter edit; no logic change.

Verification

  • Live solve: OPTIMAL / objective 1656 / MiniZinc / ~1.3s on the bundled data; post-solve check passes.
  • ruff check clean.

chriscoey and others added 12 commits April 29, 2026 00:35
Predict-then-optimise: a CSP picks integer facing counts per SKU subject
to shelf-length capacity and per-category active-SKU bounds, with a
realized-demand objective pinned to a per-(SKU, facing_count) table that
stands in for GNN regression output.

The element-style decision-indexed table lookup is the canonical
predict->CSP hand-off: the lookup is encoded as an `implies` cascade
which the rewriter expands per row of PredictedDemand, yielding one
half-reified linear equality `implies(k = facings_sku, demand_sku =
table[sku, k])` per (sku, k); only the row with k == Sku.facings
activates. No bilinearity, no big-M, no SOS2 -- pure CSP throughout.

Bundled data: 18 SKUs across 4 categories on 4 binding shelves; the
73-row `predicted_demand_table.csv` covers every (sku, k) pair for k in
{0..max_facings}. Live solve returns OPTIMAL=1656 in 0.24s on
MiniZinc, with 16/18 SKUs active and all four shelves near-full.

The GNN training arm is documented in the README as the production
hand-off (mirroring the H&M-style sales-regression pipeline already
proven in `retail_planning`); the runnable script ships only the CSP
arm so the predict->CSP shape can be inspected without the GNN
dependency.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Code:
- Add `problem.verify()` for relational arithmetic ICs (shelf capacity,
  category cardinality)
- Convert anonymous `model.require(...)` calls to named `_ic` variables
  fed into verify() (matches sol_lifecycle / money_laundering style)
- Trim docstring; drop wave-specific "two-pillar showcase" jargon
- Plain banner format matching the wave's other templates
  ("Optimal facings per SKU:" not "=== Optimal facings per SKU ===")
- Hoist all property declarations and solve_for calls before constraints

README:
- Rename "Extending" -> "Customize this template" (33/34 library norm)
- Add "Troubleshooting" section (34/34 library norm)
- Quickstart: use `python -m pip install --upgrade pip` then
  `python -m pip install .` (library norm)
- Add "Template structure" tree section
- Move expected output into Quickstart step 6 (matches wave style)
- Drop "Files" section with stale row counts
- Update sample output to reflect new banner format
- Replace "Predictive arm (out of scope)" subsection with a
  "Customize this template" entry pointing at retail_planning's GNN
  pipeline
- Rename active_on_ic / active_off_ic -> active_implies_facings_ic /
  facings_implies_active_ic. The original names mislabel the half-reified
  pair: both ICs conclude with active=1 (neither is "active off"); the
  new names describe the direction each IC enforces, matching sister
  CSP wave templates' direction-clear naming pattern
- Frontmatter description: "maximise" -> "maximize" for consistency with
  the rest of the v1 library (US spelling) and with the v1/README.md
  index entry
- v1/README.md: add planogram_optimization index entry alphabetically
- README: fix Solve result block format (verified bullet prefix + MiniZinc_unknown)
- README: drop "all four shelves capacity-binding" claim from file manifest -- only bottom is fully binding
- README: rewrite inactive-SKU explanation to credit category cardinality, not bottom-shelf capacity
- README: tighten verify-vs-implies prose ("must NOT be passed", not "silently passes")
- README: drop "headline patterns" framing
- README: rewrite Eye-level priority customize bullet with concrete setup; rewrite Multi-shelf reassignment to spell out integer-decision recipe (solve_for accepts int/cont/bin only)
- README: align inline-code identifiers to script (Sku.facings * Sku.width_cm, Shelf.length_cm)
- README: replace "three integer decisions" with "one decision plus two derived"; drop hardcoded row count
- README: add troubleshooting blocks for AttributeError on relationalai version mismatch and FileNotFoundError on CSV
- README: frontmatter maximize -> maximise (consistent with body)
- README: prepend "the script first prints the formulation" to expected-output prefix so readers aren't surprised
- script: rename DATA_DIR -> data_dir to match wave-1 sister templates
- script: extend Sku.active rationale (exists so per-category cardinality is row-aggregatable)
- script: add comment explaining when .to_schema() works (matching column names) vs per-field form
- script: tighten verify-vs-implies comment ("never pass", not "are omitted")
- script: capitalize "# Concept: Shelf"

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
- Rename module-level data path to uppercase DATA_DIR to match the
  recent CSP cart templates.
- Bump relationalai pin to 1.1.0 (the cart's current pin per
  product_configurator / synthetic_eligibility_records /
  synthetic_order_lifecycle).
- Refresh the Solve result block in the Expected output so the
  solver string matches the current SolveInfoData repr.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
… guards

Replace the half-reified implies pair coupling Sku.active to Sku.facings
with two linear inequalities (Sku.facings >= Sku.active and
Sku.facings <= Sku.max_facings * Sku.active) so the active iff facings
relationship becomes pure relational arithmetic and is re-evaluated by
problem.verify().

Add a pre-solve assertion that predicted_demand_table.csv covers every
(sku_id, k) for k in {0..max_facings} -- without this, a missing row
silently leaves Sku.realized_demand unconstrained and the solver may pick
an arbitrary value. Add a sibling assertion that every Sku.category
appears in categories.csv so the where-clause cardinality joins do not
silently drop SKUs.

Add a post-solve pandas check that confirms (sku_id, facings) ->
demand_units in the returned solution matches the input table -- defense
in depth for the implies-bodied table lookup that verify() cannot
re-evaluate.
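The active-iff-facings coupling can be checked exhaustively; a tiny brute-force sketch (plain Python, bound chosen for illustration):

```python
# Brute-force check of the coupling described above: the pair
#   facings >= active                  (active=1 forces >= 1 facing)
#   facings <= max_facings * active    (active=0 forces zero facings)
# is feasible exactly when active == (facings >= 1), for 0/1 active
# and integer facings in 0..max_facings. Illustrative only.
max_facings = 4
for facings in range(max_facings + 1):
    for active in (0, 1):
        feasible = facings >= active and facings <= max_facings * active
        assert feasible == (active == int(facings >= 1))
print("active-iff-facings holds for facings in 0..", max_facings)
```

Because both inequalities are linear in the decision variables, no reification is needed and both directions of the equivalence survive into `problem.verify()`.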

Tighten cart conventions (concept-intro comments, per-concept CSV
inlining, US spellings throughout), fix the README relationalai version
mismatch (1.0.14 -> 1.1.0), soften the HiGHS-not-appropriate claim, and
generalize the customisation bullet to any per-(SKU, k) regression model.
Add input-validation helpers covering silent-failure modes the existing
guards did not catch:
- _assert_unique_keys flags duplicate sku_id, shelf_id, or category rows
  that would otherwise collapse into the same entity with conflicting
  property values.
- _assert_categories_match_skus now checks both directions: SKU
  categories must appear in categories.csv (existing), AND categories.csv
  rows must have at least one matching SKU. An extra Category row with
  min_skus_active >= 1 produces an empty `.per(Category)` group and
  silently weakens the cardinality bound, which the previous guard
  missed.
- _assert_shelves_cover_skus rejects dangling assigned_shelf_id values
  that would silently exempt a SKU from the shelf-capacity IC.

Doc and prose fixes:
- Correct the "Boolean is not a valid solve_for type" claim in How it
  works -- the script uses solve_for(..., type="bin") and the customise
  bullet at line 218 already says bin is supported. The actual reason
  Sku.active is Integer 0/1 is that `sum(Sku.facings >= 1).per(Category)`
  is not a valid relational sum.
- Update the expected-output solve time from 0.24s to 1.4s to reflect
  current cold-run reality.
- Add a remediation hint to the post-solve AssertionError directing the
  reader at problem.display() output if the lookup somehow misfires.
- Tighten the categories error to show the row format
  (`<category>,<min>,<max>`).
- Make the GNN customisation bullet concrete on the integration point
  (drop `predicted_demand_table.csv` in vs replace the read_csv call).
- Trim front-matter description to the audience-friendly half; move the
  "element-style decision-indexed table lookup" detail into the body.
- Add a footnote that `MiniZinc_unknown` is the version string MiniZinc
  reports for itself today, not a misconfiguration.
- Fix "Three integer decisions per SKU" docstring claim (only one is a
  free decision; two are pinned by ICs) and refresh the Output: block to
  match what the script actually prints.

Other:
- Drop unused Sku.brand from the base script (was only referenced by
  the optional brand-block contiguity customisation).
- Pin pandas>=2.0 in pyproject.toml to match sister cart templates.
- Replace the stale "pandas anti-join" comment in the post-solve check
  with "Python dict lookup" to reflect the actual implementation.
- Fix the stale UK spelling in v1/README.md's planogram_optimization
  index entry (maximise -> maximize).
Extend _assert_demand_table_complete to also reject rows with
facings_count=0 and demand_units != 0 -- without this, replacement data
can let the objective collect demand from inactive SKUs that consume no
shelf capacity (k=0 must always carry demand_units=0 for the lookup to
make economic sense).

Generalize _assert_unique_keys to accept compound keys and apply it to
predicted_demand_table.csv on (sku_id, facings_count). The existing
set-based completeness check silently collapsed duplicate (sku_id, k)
rows; a compound-key duplicate-row guard now catches that.
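A sketch of the generalized guard's logic (helper name and row shape are assumptions, not the template's exact code):

```python
# Compound-key uniqueness guard: duplicate (sku_id, facings_count)
# rows would silently collapse in a set-based completeness check, so
# flag them before solving. Names are illustrative.
from collections import Counter

def assert_unique_keys(rows, key_cols):
    keys = [tuple(row[c] for c in key_cols) for row in rows]
    dupes = [k for k, n in Counter(keys).items() if n > 1]
    assert not dupes, f"duplicate keys on {key_cols}: {dupes}"

rows = [{"sku_id": "A", "facings_count": 1, "demand_units": 12},
        {"sku_id": "A", "facings_count": 1, "demand_units": 15}]
try:
    assert_unique_keys(rows, ("sku_id", "facings_count"))
except AssertionError as e:
    print(e)  # duplicate keys on ('sku_id', 'facings_count'): [('A', 1)]
```

The same helper covers the single-column cases (sku_id, shelf_id, category) by passing a one-element key tuple.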

Doc fixes:
- Remove the incorrect "Per-SKU minimum facings" customise advice that
  said to remove disallowed (sku, k) pairs from PredictedDemand --
  doing so would now trip the completeness guard. Keep the
  min_facings * active formulation.
- Rewrite the "Multi-shelf reassignment" customise bullet. The previous
  text suggested `where(Shelf.id == Sku.shelf_id)` with shelf_id as a
  decision, which is the same decision-vs-data-equality limitation the
  template already documents. Replace with a one-hot binary
  Sku.assigned[Shelf] formulation as the principled rewrite.
- Replace remaining "pandas anti-join" mentions in the README with
  "Python dict lookup" to match the actual implementation.
- Fix a missed UK spelling ("realised-demand" -> "realized-demand").
…ullets

The previous multi-shelf bullet's `Sku.facings <= Sku.max_facings *
Sku.assigned` per-pair constraint zeroed out the SKU-global facings
decision on every unassigned shelf, making any SKU with more than one
candidate shelf unallocatable. Replace with a per-(SKU, Shelf) facing
decision (SkuShelf.facings) and rewrite capacity over those.

The eye-level priority bullet showed `model.require(...)` without the
matching `problem.satisfy(...)` call, so the IC would not feed the
solver. Add the satisfy() call and clarify that the constraint
validates static shelf assignment in the bundled model rather than
reassigning premium SKUs (reassignment requires the multi-shelf
formulation).
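The broken versus fixed formulations can be contrasted with a two-shelf brute-force enumeration (plain Python; the bound and one-hot pairs are illustrative):

```python
# Why the SKU-global facings formulation fails: with one-hot assignment
# over candidate shelves s1, s2, the per-pair constraint
#   facings <= max_facings * assigned[s]   (for EVERY candidate shelf)
# zeroes the global facings whenever any candidate shelf is unassigned.
max_facings = 3
feasible_global = set()
for facings in range(max_facings + 1):
    for a1, a2 in ((1, 0), (0, 1)):            # one-hot over two shelves
        if facings <= max_facings * a1 and facings <= max_facings * a2:
            feasible_global.add(facings)
print(feasible_global)  # {0}: a multi-candidate SKU is unallocatable

# The per-(SKU, Shelf) fix: one facings decision per pair, each bounded
# by its own assignment; the pairwise sum recovers the SKU-level total.
feasible_total = set()
for f1 in range(max_facings + 1):
    for f2 in range(max_facings + 1):
        for a1, a2 in ((1, 0), (0, 1)):
            if f1 <= max_facings * a1 and f2 <= max_facings * a2:
                feasible_total.add(f1 + f2)
print(sorted(feasible_total))  # [0, 1, 2, 3]
```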
@github-actions

github-actions Bot commented May 8, 2026

The docs preview for this pull request has been deployed to Vercel!

✅ Preview: https://relationalai-docs-k42ggxgoj-relationalai.vercel.app/build/templates
🔍 Inspect: https://vercel.com/relationalai/relationalai-docs/6kQ1QnfhdiqCY9vanjPJE22XjMGE

@chriscoey chriscoey marked this pull request as ready for review May 8, 2026 16:29
Mirrors the pattern set by templates#55: templates with a Predictive
reasoner carry `private: true` in the README front-matter so the docs
site build filters them out of the public gallery (PRIVATE=true npm
run build for the private site, plain npm run build for public).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Adds the private: true flag that was missed when the template
landed in #49. Aligns with the convention set by templates#55:
templates whose reasoning_types include Predictive carry
`private: true` so the docs site filters them out of the public
gallery (PRIVATE=true npm run build for the private site).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Align with the existing synthetic_eligibility_records phrasing for
the same concept ("implies-bodied ICs are solver-only and verify()
returns silently-OK"). "Wire-format constraint relations" is
internal compiler terminology.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Collaborator

@cafzal cafzal left a comment


Ship with nits. Defense-in-depth on silent-failure modes is exemplary: _assert_demand_table_complete (planogram_optimization.py:69-98) checks both completeness over (sku_id, k in 0..max_facings) and k=0 -> demand_units=0, and the post-solve dict re-validation (:326-348) catches what verify() is told to skip. Rationale is in-source (:62-68, :236-240), not hand-waved. Active-iff-facings is coupled via two linear inequalities: no big-M, no SOS2, both ICs verifiable.

Bundled data exercises both ICs non-trivially: candy and household_paper each have 4 SKUs vs max_skus_active=3 (the cardinality bound binds); the bottom shelf binds capacity at 90/90 cm; objective 1656 reproduces. The implies-cascade predict-then-optimize hand-off is a genuinely new pattern in the v1 portfolio, and the customization bullet at README.md:209 makes the generalization concrete (workforce / ad-spend / line-speed).

Issues (all NITs)

  • README.md:25 — convention is to bold the reasoning type in "What this template is for" (v1/diet/README.md:21 does **Prescriptive**). This template bolds **element-style decision-indexed table lookup** instead. Add a phrase like "uses Predictive + Prescriptive reasoning" near the top.
  • planogram_optimization.py:35-41 — no # Configure inputs header before DATA_DIR; jumps straight to "Pre-solve data invariants". Minor checklist drift from canonical block ordering.
  • data/skus.csv — the brand column is loaded but never modeled (only forward-referenced for the contiguity customization at README.md:223). Add a one-line note in "What's included" or drop until used.
  • README.md:50-53 — only predicted_demand_table.csv gets a data-invariant sentence in "What's included". Add equivalent for skus/categories/shelves (unique key, FK requirements).
  • README.md:209 — the "Replace the vendored table" customization is a single 7-sentence wall covering motivation + model options + structural requirement + inline-inference. Split after "...so the model can be queried at every k."

Side-effect check: v1/demand_forecasting/README.md is a clean single-line private: true add at line 5; no logic touched. v1/README.md only adds the planogram row. Scoped correctly.

py_compile and ruff check clean.

- README: bold the Predictive + Prescriptive reasoning phrase in the
  template overview, matching the diet-template convention.
- README: add per-CSV data invariants (key uniqueness, FK targets) to
  the "What's included" block, and note that the "brand" column on
  skus.csv is reserved for the brand-block contiguity customization.
- README: split the "Replace the vendored table" customization into
  two paragraphs (motivation/structural requirement, then operational
  swap-in / inline inference) for readability.
- planogram_optimization.py: add the canonical "Configure inputs"
  section header before DATA_DIR, matching diet.py and
  factory_production.py.

2 participants