Fix no results returned when no discrete variables are present in MindtPy #3861
bernalde wants to merge 23 commits into Pyomo:main from
Conversation
…dtPy, add test case for this bug fixing
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
…nto fix/mindtpy-fix
Co-authored-by: Tarik Levent Guler <64302098+tarikLG@users.noreply.github.com>
Fix no results returned when no discrete variables are present in Min…
Apply Black formatting to pyomo/contrib/mindtpy/algorithm_base_class.py
Hi team, I’ve investigated the current CI failures across Linux, macOS, and Windows. The failures in pyomo/contrib/solver/tests/solvers are not caused by the code changes in this PR. The root cause is an expired GAMS license in the test environment. It appears the license might have expired a few days ago, which is why we are seeing identical failures across all platforms. Interestingly, the tests for MindtPy are still passing (likely because they use different solver paths or have different fallback mechanisms), but the core solver tests are blocked. Once the GAMS license is renewed in the CI environment, these tests should return to green.

@Toflamus we are aware of the issue and discussed it during the developer call today. This is indeed an infrastructure issue and we're working on getting it fixed.
Codecov Report ❌ Patch coverage is

Additional details and impacted files:

@@ Coverage Diff @@
##             main    #3861      +/-   ##
==========================================
- Coverage   89.67%   89.67%   -0.01%
==========================================
  Files         908      908
  Lines      106735   106757    +22
==========================================
+ Hits        95717    95734    +17
- Misses      11018    11023     +5
jsiirola left a comment:
One question about results bounds, but otherwise, this looks good to me.
# explicit bounds, infer them from the objective value. For a direct
# continuous optimal solve, primal==dual.
Doesn't that imply convexity? I feel like you are making a promise that may not hold for the actual model you are solving...
Good catch — the "primal==dual" phrasing is too strong. What this fallback actually relies on is not convexity but the solver's own claim: if the solver reports tc.optimal and didn't populate its own bounds, we use the objective value as the best available information for both bounds.
That said, you're right that for a local NLP solver like IPOPT, tc.optimal really only guarantees local optimality, so setting both bounds equal overstates what we actually know. Do you want me to fix the comment?
Absolutely! Good catch. We cannot promise dual==primal, so in general we should inherit the dual and primal from the subsolver if they provide any. If they don't provide a dual (as with IPOPT), we should only report the primal.
I'll let others weigh in, but I think I would prefer to change the results to only return what we know; in this case, only return the objective value for the primal (feasible) bound and then return None for the dual (infeasible) bound.
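A minimal sketch of that "only return what we know" policy; `populate_bounds` and its arguments are hypothetical stand-ins for illustration, not MindtPy's actual API:

```python
from types import SimpleNamespace

def populate_bounds(results, objective_value, minimizing, dual_bound=None):
    """Report only what we know: the objective value becomes the primal
    (feasible) bound; the dual (infeasible) bound stays None unless the
    subsolver actually supplied one."""
    if minimizing:
        results.upper_bound = objective_value  # primal side for a min problem
        results.lower_bound = dual_bound       # None when no dual is available
    else:
        results.lower_bound = objective_value
        results.upper_bound = dual_bound
    return results

# Example: a local NLP solve (no dual bound reported) of a minimization model
res = populate_bounds(SimpleNamespace(), objective_value=3.5, minimizing=True)
print(res.upper_bound, res.lower_bound)  # 3.5 None
```

With this policy a local solver like IPOPT yields a primal bound and an explicitly unknown (`None`) dual bound, rather than a spurious claim of global optimality.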
Btw, just to mention, the "primal==dual" phrasing in the comment is admittedly loose, but the behavior here is safe. A few points:

- `prob` is the result-reporting object (`self.results.problem`), not any subproblem bound used algorithmically. This code only runs in the short-circuit path (no discrete variables), so these bounds never feed back into any decomposition logic. It's purely for populating the solver results that get returned to the user. We're just filling in the `lower_bound` and `upper_bound` fields so the returned `SolverResults` isn't incomplete.
- This only fires in the short-circuit path: when `model_is_valid()` detects zero discrete variables and solves the original model directly as an LP/NLP. MindtPy then returns False and never enters the decomposition loop. So these bounds don't influence any cuts or iterations.
- The fallback only triggers when the solver claims `tc.optimal` but didn't populate its own bounds. We're just mirroring what the solver already told us: if it says optimal, using the objective value as the reported bound is the best available information.
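For concreteness, the fallback described above can be sketched like this (the function and argument names are hypothetical, not the actual MindtPy code):

```python
def fallback_bounds(termination, solver_lb, solver_ub, obj_value):
    """Only fill in bounds the subsolver left unset, and only when it
    claimed optimality; otherwise report exactly what the solver gave us."""
    if termination != "optimal":
        return solver_lb, solver_ub  # infer nothing
    lb = solver_lb if solver_lb is not None else obj_value
    ub = solver_ub if solver_ub is not None else obj_value
    return lb, ub

print(fallback_bounds("optimal", None, None, 1.25))         # (1.25, 1.25)
print(fallback_bounds("maxIterations", None, None, 1.25))   # (None, None)
```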
Anyway, I agree that we should only keep what we know.
Fixes #3855.
Summary/Motivation:
This PR fixes an issue where MindtPy can short-circuit on “no discrete decisions” (LP/NLP) and then fail to reliably return a proper SolverResults and/or load primal values onto the input model, even when the direct LP/NLP solve succeeds. This behavior breaks downstream meta-solvers (e.g., GDPopt subproblem solves) that depend on Var.value to capture an incumbent.
Reference: #3855
MindtPy contains a validation/short-circuit path intended to directly solve models that do not require decomposition (e.g., LP/NLP, or models where all discrete variables are fixed). In this path, MindtPy may:
- return None from solve() (a bare return)
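The bare-return behavior being fixed can be illustrated with a simplified sketch (`solve`, `direct_solve`, and the `SolverResults` stand-in here are hypothetical, not MindtPy's real classes or control flow):

```python
class SolverResults:
    """Stand-in for Pyomo's SolverResults; only the fields used below."""
    def __init__(self):
        self.termination = None
        self.objective = None

def solve(has_discrete_vars, direct_solve):
    # Short-circuit path: no discrete variables, solve directly as LP/NLP.
    if not has_discrete_vars:
        # Before the fix, a bare `return` here handed the caller None;
        # the fix is to propagate a populated results object instead.
        results = SolverResults()
        results.termination, results.objective = direct_solve()
        return results
    ...  # decomposition loop (omitted)

res = solve(False, lambda: ("optimal", 1.5))
print(res.termination, res.objective)  # optimal 1.5
```

Returning a populated object keeps downstream callers (e.g., meta-solvers inspecting the results and loaded variable values) working on the short-circuit path.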
Changes proposed in this PR:
Legal Acknowledgement
By contributing to this software project, I have read the contribution guide and agree to the following terms and conditions for my contribution: