Add flow cover cuts at root #1178
Conversation

No actionable comments were generated in the most recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration used: Path: .coderabbit.yaml | Review profile: CHILL | Plan: Enterprise
📒 Files selected for processing (1)
🚧 Files skipped from review as they are similar to previous changes (1)
📝 Walkthrough

Adds flow-cover cut support to the MIP solver (new cut type, settings, generation logic, integration into the cut pipeline, and tests) and extends the LP benchmark script with recursive model discovery and a cut-mode flag to control cut-related CLI options.

Changes:
- Flow cover cut implementation & integration
- Benchmark CLI: recursive discovery & cut-mode
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (warning)
Actionable comments posted: 4
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@benchmarks/linear_programming/run_mps_files.sh`:
- Around line 359-364: The recursive file collection can produce duplicate
basenames that cause log files to be overwritten; update the code that
constructs per-model log filenames to include a path-unique component (e.g., use
the file's path relative to MPS_DIR with slashes replaced by safe separators or
use dirname+basename) so each entry in mps_files yields a unique log name when
RECURSIVE=true and --write-log-file is enabled; locate where run_mps_files.sh
iterates over the mps_files array and replace uses of basename (or "$(basename
"$file").log") with a deterministic path-derived filename (for example
"${file#$MPS_DIR/}" with non-alphanumerics normalized) to avoid clobbering logs.
In `@cpp/src/cuts/cuts.cpp`:
- Around line 951-953: The hardcoded thresholds tol and bound_tol in cuts.cpp
(const f_t tol = 1e-6; const f_t bound_tol = 1e-8;) should be replaced with the
solver's configured numerical tolerances; find where these are used (flow-cover
acceptance and downstream cut cutoff computations) and read the solver/LP
feasibility and bound tolerances instead of the literals (e.g., use the
simplex/LP object's feasibility epsilon and bound epsilon parameters). Update
all dependent comparisons that currently use tol or bound_tol so they derive
from those solver tolerances (and propagate scaling if needed), and add a short
comment noting these come from the solver's tolerance settings rather than being
hardcoded.
- Around line 3881-3888: The code assumes slack_map_[i] is valid and indexes
lp.lower/upper unconditionally; when slack_map_[i] == -1 (rows with
slack_count==0) this causes out-of-bounds access. Fix by guarding the lookup:
check if slack_map_[i] >= 0 before accessing lp.lower[slack_map_[i]] and
lp.upper[slack_map_[i]]; if negative, set slack_lower/slack_upper (and derived
sigma_slack_lower/sigma_slack_upper) to safe defaults (e.g., 0) or compute them
in a way consistent with slack_coeff_i==0 so the rest of the logic remains
correct. Apply the same guarded check and handling in the mirrored block around
lines 3981-3988 as well.
In `@cpp/tests/mip/cuts_test.cu`:
- Around line 1482-1498: The test utilities access fixed indices without
validating vector size; update single_node_flow_fractional_solution to check
num_cols >= 6 (or clamp/resize/throw) before writing xstar[0]..xstar[5], and
update single_node_flow_y_feasible to verify y.size() >= 3 before reading y[0],
y[1], y[2] (or return false/assert/throw with a clear message). Use the function
names single_node_flow_fractional_solution and single_node_flow_y_feasible and
include a clear failure mode (assert or exception) so an input-shape mismatch
yields a deterministic test failure rather than undefined behavior.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration used: Path: .coderabbit.yaml | Review profile: CHILL | Plan: Enterprise | Run ID: 21e0b82f-501c-4310-a5c3-f4986b9a676f
📒 Files selected for processing (9):
- benchmarks/linear_programming/run_mps_files.sh
- cpp/include/cuopt/linear_programming/constants.h
- cpp/include/cuopt/linear_programming/mip/solver_settings.hpp
- cpp/src/cuts/cuts.cpp
- cpp/src/cuts/cuts.hpp
- cpp/src/dual_simplex/simplex_solver_settings.hpp
- cpp/src/math_optimization/solver_settings.cu
- cpp/src/mip_heuristics/solver.cu
- cpp/tests/mip/cuts_test.cu
```bash
if [ "$RECURSIVE" = true ]; then
  mapfile -t mps_files < <(find "$MPS_DIR" -type f \( -name "*.mps" -o -name "*.MPS" -o -name "*.SIF" \) | sort)
else
  # Gather .mps/.MPS and .SIF files in the directory
  mapfile -t mps_files < <(ls "$MPS_DIR"/*.mps "$MPS_DIR"/*.MPS "$MPS_DIR"/*.SIF 2>/dev/null)
fi
```
Recursive mode can clobber per-model log files.
With --recursive, two models from different subdirectories can share the same basename, but the worker still writes logs to $(basename ...).log. The later run overwrites the earlier one, so benchmark results become incomplete whenever --write-log-file is enabled.
Suggested fix

```diff
- if [ "$WRITE_LOG_FILE" = true ]; then
-   args="$args --log-file $OUTPUT_DIR/$(basename "${mps_file%.mps}").log"
- fi
+ if [ "$WRITE_LOG_FILE" = true ]; then
+   rel_model_path="${mps_file#$MPS_DIR/}"
+   safe_model_name="${rel_model_path//\//__}"
+   safe_model_name="${safe_model_name%.mps}"
+   safe_model_name="${safe_model_name%.MPS}"
+   safe_model_name="${safe_model_name%.SIF}"
+   args="$args --log-file $OUTPUT_DIR/${safe_model_name}.log"
+ fi
```
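The path-derived naming in the suggested fix can be sketched in isolation. This is a hypothetical illustration only: `MPS_DIR` and the model path below are made-up examples, not paths from the benchmark suite.

```shell
# Derive a collision-free log name from the model's path relative to MPS_DIR,
# so two models with the same basename in different subdirectories get
# distinct log files.
MPS_DIR="/data/mps"
mps_file="$MPS_DIR/miplib/sub/model.mps"

rel_model_path="${mps_file#$MPS_DIR/}"       # miplib/sub/model.mps
safe_model_name="${rel_model_path//\//__}"   # slashes -> "__" separators
safe_model_name="${safe_model_name%.mps}"    # strip the extension
log_name="${safe_model_name}.log"
echo "$log_name"
```

Because the relative path is folded into the name, `miplib/sub/model.mps` and `other/model.mps` map to different log files under the same `OUTPUT_DIR`.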
```cpp
const f_t tol       = 1e-6;
const f_t bound_tol = 1e-8;
```
Use solver tolerances here instead of fixed epsilons.
This separator seeds its own 1e-6 / 1e-8 thresholds and then builds more hardcoded cutoffs on top of them. That makes flow-cover acceptance differ from the rest of the simplex tolerances on scaled or degenerate models, which can suppress otherwise valid cuts or admit numerically fragile ones.
As per coding guidelines, flag hardcoded tolerances that fail on degenerate problems and missing documentation for numerical tolerances and algorithm parameters.
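The shape of the proposed change can be sketched as follows. This is a hypothetical sketch only: `settings_t`, `flow_cover_thresholds_t`, and the field names are illustrative stand-ins, not the actual cuOpt settings API.

```cpp
#include <cassert>

// Illustrative solver settings carrying configured numerical tolerances.
// Field names are assumptions for this sketch, not real cuOpt members.
struct settings_t {
  double feasibility_tol = 1e-6;  // simplex/LP feasibility epsilon
  double bound_tol       = 1e-8;  // bound-comparison epsilon
};

struct flow_cover_thresholds_t {
  double tol;
  double bound_tol;
};

// Derive the separator's thresholds from the solver's configured tolerances
// instead of seeding fresh 1e-6 / 1e-8 literals, so loosening the solver's
// feasibility tolerance loosens flow-cover acceptance with it.
inline flow_cover_thresholds_t make_thresholds(const settings_t& s)
{
  return {s.feasibility_tol, s.bound_tol};
}
```

With this wiring, any downstream cutoff built from `tol` or `bound_tol` automatically tracks a user-configured tolerance on scaled or degenerate models.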
```cpp
const f_t a_ij          = lp.A.x[p];
const f_t slack_lower   = lp.lower[slack_map_[i]];
const f_t slack_upper   = lp.upper[slack_map_[i]];
const f_t slack_coeff_i = slack_coeff[i];
const f_t sigma_slack_lower =
  slack_coeff_i > 0.0 ? slack_coeff_i * slack_lower : slack_coeff_i * slack_upper;
const f_t sigma_slack_upper =
  slack_coeff_i > 0.0 ? slack_coeff_i * slack_upper : slack_coeff_i * slack_lower;
```
Guard the slack lookup before indexing bounds.
flow_cover_constraints_ now explicitly admits rows with slack_count == 0, but both bound-building passes still do lp.lower[slack_map_[i]] / lp.upper[slack_map_[i]] unconditionally. If one of those accepted equality rows contains a continuous variable, slack_map_[i] stays -1 here and this becomes out-of-bounds access during preprocessing.
As per coding guidelines, flag invalid memory access patterns including out-of-bounds access.
Also applies to: 3981-3988
```cpp
std::vector<double> single_node_flow_fractional_solution(int num_cols)
{
  std::vector<double> xstar(num_cols, 0.0);
  xstar[0] = 1.0;
  xstar[1] = 2.0 / 3.0;
  xstar[2] = 1.0;
  xstar[3] = 3.0;
  xstar[4] = 4.0;
  xstar[5] = 3.0;
  return xstar;
}

bool single_node_flow_y_feasible(const std::vector<double>& y)
{
  const double activity = y[0] + y[1] - y[2];
  return activity <= 4.0 + 1e-8;
}
```
Guard fixed-index writes/reads to avoid UB if the LP column layout changes.
Lines 1485–1490 and line 1496 assume minimum vector sizes (6 and 3) without checks. If the convert_user_problem output shape changes, this can become out-of-bounds access and make the test crash or flake instead of failing with a clear assertion.
Suggested patch

```diff
 std::vector<double> single_node_flow_fractional_solution(int num_cols)
 {
+  cuopt_assert(num_cols >= 6, "Expected at least 6 columns for x0..x2,y0..y2");
   std::vector<double> xstar(num_cols, 0.0);
   xstar[0] = 1.0;
   xstar[1] = 2.0 / 3.0;
   xstar[2] = 1.0;
   xstar[3] = 3.0;
   xstar[4] = 4.0;
   xstar[5] = 3.0;
   return xstar;
 }

 bool single_node_flow_y_feasible(const std::vector<double>& y)
 {
+  cuopt_assert(y.size() == 3, "Expected exactly 3 flow variables");
   const double activity = y[0] + y[1] - y[2];
   return activity <= 4.0 + 1e-8;
 }
```

As per coding guidelines, **/*.{cpp,cuh,cu,hpp,py}: Flag missing input validation at library and server boundaries.
Also applies to: 1579-1581
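A runnable sketch of the suggested guards, with `std::invalid_argument` standing in for `cuopt_assert` so the example is self-contained. The `guarded_*` names are hypothetical; shapes and values come from the test utilities quoted above.

```cpp
#include <stdexcept>
#include <vector>

// Guarded variant: an undersized output vector now fails loudly and
// deterministically instead of writing out of bounds.
std::vector<double> guarded_fractional_solution(int num_cols)
{
  if (num_cols < 6) {
    throw std::invalid_argument("Expected at least 6 columns for x0..x2,y0..y2");
  }
  std::vector<double> xstar(num_cols, 0.0);
  xstar[0] = 1.0;
  xstar[1] = 2.0 / 3.0;
  xstar[2] = 1.0;
  xstar[3] = 3.0;
  xstar[4] = 4.0;
  xstar[5] = 3.0;
  return xstar;
}

// Guarded variant: reject inputs that do not carry exactly 3 flow variables
// before reading y[0], y[1], y[2].
bool guarded_y_feasible(const std::vector<double>& y)
{
  if (y.size() != 3) {
    throw std::invalid_argument("Expected exactly 3 flow variables");
  }
  return y[0] + y[1] - y[2] <= 4.0 + 1e-8;
}
```

Under this scheme an input-shape mismatch surfaces as a single clear test failure rather than undefined behavior that may pass or crash depending on allocator layout.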
Closes: #1065