
KKT Transform #3860

Open

ChrisLaliwala wants to merge 25 commits into Pyomo:main from ChrisLaliwala:kkt-transform

Conversation

@ChrisLaliwala

Fixes # .

Summary/Motivation:

This PR creates a Pyomo transformation for reformulating models using the KKT conditions.

Changes proposed in this PR:

Legal Acknowledgement

By contributing to this software project, I have read the contribution guide and agree to the following terms and conditions for my contribution:

  1. I agree my contributions are submitted under the BSD license.
  2. I represent I am authorized to make the contributions and grant the license. If my employer has rights to intellectual property that includes these contributions, I represent that I have received permission to make contributions and grant the required license on behalf of that employer.

@ChrisLaliwala
Author

Need to add a couple more tests before the PR is ready to be reviewed. Will be done in 1-2 days.

ChrisLaliwala and others added 3 commits February 25, 2026 12:29
Co-authored-by: Bethany Nicholson <blnicho@users.noreply.github.com>
@ChrisLaliwala ChrisLaliwala marked this pull request as ready for review February 25, 2026 17:58
@blnicho blnicho requested a review from emma58 March 3, 2026 19:49
@codecov

codecov bot commented Mar 6, 2026

Codecov Report

❌ Patch coverage is 97.20670% with 5 lines in your changes missing coverage. Please review.
✅ Project coverage is 89.69%. Comparing base (749e3ac) to head (d62f32c).

Files with missing lines Patch % Lines
pyomo/core/plugins/transform/kkt.py 97.20% 5 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #3860      +/-   ##
==========================================
+ Coverage   89.47%   89.69%   +0.21%     
==========================================
  Files         908      909       +1     
  Lines      106738   106917     +179     
==========================================
+ Hits        95509    95902     +393     
+ Misses      11229    11015     -214     
Flag Coverage Δ
builders 29.06% <13.40%> (-0.03%) ⬇️
default 85.98% <97.20%> (?)
expensive 35.48% <13.40%> (?)
linux 87.16% <97.20%> (-2.02%) ⬇️
linux_other 87.16% <97.20%> (+0.01%) ⬆️
oldsolvers 27.98% <13.40%> (+0.45%) ⬆️
osx 82.50% <97.20%> (+0.02%) ⬆️
win 85.58% <97.20%> (+0.01%) ⬆️
win_other 85.58% <97.20%> (+0.01%) ⬆️

Flags with carried forward coverage won't be shown.


Member

@jsiirola jsiirola left a comment


This is really good. I don't think that I found anything that is incorrect, but lots of opportunities to make the code cleaner / more efficient.

One general note: you create a lot of intermediate data structures as sets (usually ComponentSet) -- but you never test for membership in those sets, and they don't look like they are being used for deduplication (only for later iteration). In that case it is significantly more efficient to make them lists rather than hashed (set or dict) data structures.
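As a plain-Python illustration of that point (built-in `list` and `set` standing in for Pyomo's ComponentSet; this is a sketch, not code from the PR):

```python
# If a collection is only ever iterated over, a list is cheaper: no hashing
# on insert. A set only pays for itself when you test membership or need
# deduplication. (Built-ins here stand in for Pyomo's ComponentSet.)

items = ["x1", "x2", "x1"]

collected = []            # iteration-only: a list suffices
for v in items:
    collected.append(v)

seen = set()              # membership test / dedup: use a set
unique = []
for v in items:
    if v not in seen:
        seen.add(v)
        unique.append(v)

print(collected)  # ['x1', 'x2', 'x1']
print(unique)     # ['x1', 'x2']
```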


The following is a suggestion (that I thought would make this a lot simpler / easier to follow), but could probably be argued into deferring for later:

In addition to the notes below, I wonder if the transformation could be significantly simpler / easier to follow if there was a bigger refactor? Right now the transformation

  1. walks the whole model and collects the variables from all constraints (and objectives) and does a fair amount of preliminary error checking (e.g., verifying the completeness of the parametrize_wrt set)
  2. walks the model a second time collecting the constraints and sorts them into internal data structures (equality / inequality constraint lists, with bound information)
  3. loop through the variables and build bound expressions
  4. walk the model and collect objectives to create the Lagrangean
  5. loop through the constraint lists and add them to the Lagrangean
  6. loop through the variable bounds and add them to the Lagrangean
  7. Compute the derivative of the Lagrangean and add the stationarity constraints
  8. loop through the inequality constraint list and define the complementary slackness constraints
  9. loop through the variable bounds and define the complementary slackness

This is a lot of repeated work (and requires making a lot of somewhat unintuitive internal data structures). I wonder if you can avoid most of these data structures and the complexity by basically reversing the order you do things:

  1. initialize
    • lagrangean = 0
    • var_set = ComponentSet()
    • create the block to hold the KKT system (but don't add it to the model)
    • 2 empty mappings (obj -> dual variable(s); dual variable -> obj)
  2. collect the active Objectives from the model. This should be a single-element list (if not, error).
    • collect the variables from the objective into the var_set
    • add the expression to the Lagrangean
  3. collect the active Constraint ComponentDatas from the model; for each one:
    • collect the variables from the constraint into the var_set
    • determine if it is an equality constraint; if yes:
      • get the multiplier, add the term (gamma * (body - ub)) to the Lagrangean
      • add the gamma -> con and con -> gamma to the mappings
    • if no:
      • convert con to lower, body, upper
      • if lower is not None:
        • get a multiplier, add the term (alpha1 * (lower - body)) to the Lagrangean
        • add the dual / expression complementarity condition to the model (you can use the same ComplementarityList you were using before)
        • add the alpha1 -> con to the mapping
      • if upper is not None:
        • get a multiplier, add the term (alpha2 * (body - upper)) to the Lagrangean
        • add the dual / expression complementarity condition to the model
        • add the alpha2 -> con to the mapping
      • add the con -> (alpha, alpha) to the mappings
  4. Now that we have all the variables, do the error checking on parametrize_wrt
  5. Loop through the variables
    • get multipliers and add terms to the Lagrangean and define complementary slackness (as with inequality constraints above)
  6. Now we have a full Lagrangean and the correct var_set: take the derivative and define the stationarity constraints.
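The single-pass structure proposed above could be sketched roughly as follows. This is a plain-Python mock (a namedtuple stands in for a normalized Pyomo constraint, and Lagrangean "terms" are just strings), not working Pyomo code; helpers like `get_multiplier` are hypothetical:

```python
from collections import namedtuple

# Mock stand-in for a Pyomo constraint, already normalized to bounds;
# "equality" flags lower == upper.
Con = namedtuple("Con", ["name", "lower", "upper", "equality"])

cons = [
    Con("c1", 0.0, 0.0, True),    # equality constraint
    Con("c2", 1.0, None, False),  # lower bound only
    Con("c3", 1.0, 5.0, False),   # ranged constraint
]

lagrangean_terms = ["f(x)"]       # start from the (single) active objective
con_to_mult = {}                  # con name -> multiplier name(s)
complementarity = []              # (multiplier, expression) pairs

def get_multiplier(tag):
    # Hypothetical helper: in the real transformation this would create a
    # Var on the KKT block; here it just returns a name.
    return f"mult_{tag}"

for con in cons:
    if con.equality:
        gamma = get_multiplier(con.name)
        lagrangean_terms.append(f"{gamma}*(body_{con.name} - ub_{con.name})")
        con_to_mult[con.name] = gamma
    else:
        mults = []
        if con.lower is not None:
            a1 = get_multiplier(con.name + "_lb")
            lagrangean_terms.append(f"{a1}*(lb_{con.name} - body_{con.name})")
            complementarity.append((a1, f"lb_{con.name} - body_{con.name}"))
            mults.append(a1)
        if con.upper is not None:
            a2 = get_multiplier(con.name + "_ub")
            lagrangean_terms.append(f"{a2}*(body_{con.name} - ub_{con.name})")
            complementarity.append((a2, f"body_{con.name} - ub_{con.name}"))
            mults.append(a2)
        con_to_mult[con.name] = tuple(mults)

print(con_to_mult["c3"])  # ('mult_c3_lb', 'mult_c3_ub')
```

With everything accumulated in one pass, the stationarity step would then differentiate the assembled Lagrangean once at the end, as step 6 describes.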

Last note (that should NOT be part of this PR, but we should consider for the future): As the model is likely to go out through the NL writer, we could save on subsequent computation time by the judicious use of Expression objects. This could involve rewriting the primal constraints or making the NL writer smarter about identifying repeated expressions.

for con in model.component_data_objects(
    Constraint, descend_into=True, active=True
):
    if con.has_lb() and con.has_ub() and (con.lower == con.upper):
Member


This is a bit of a nuanced thing, but Pyomo stores the "raw" relational expression from the user in the Constraint. Asking for lower, or body, or upper forces it to "normalize" the constraint with a call to to_bounded_expression. It would be significantly more efficient to only call that once with something like:

lower, body, upper = con.to_bounded_expression()

and then has_lb is equivalent to lower is not None

Testing for equality is also tricky: con.equality() is probably your best bet. It will return True for EqualityExpressions (i.e., constraints created with ==), and for RangedExpressions where lower is upper. That is a bit restrictive, but the problem with testing with == is that if the bounds are "not potentially variable" expressions (i.e., expressions of mutable Params), then == will create a new equality expression that will raise an exception in your "if" (evaluating a non-constant expression in a boolean context).

In the end, I would suggest logic something like:

lower, body, upper = con.to_bounded_expression()
if con.equality():
    equality_cons.append(con)
    info.equality_con_to_expr[con] = upper - body
else:
    if lower is not None:
        inequality_cons.append((con, "lb"))
        info.inequality_con_to_expr.setdefault(con, {})["lb"] = lower - body
    if upper is not None:
        inequality_cons.append((con, "ub"))
        info.inequality_con_to_expr.setdefault(con, {})["ub"] = body - upper

    if lower is not None and upper is not None:
        # we want to keep track of the ranged constraints because the mapping between
        # multipliers and ranged constraints will be a tuple (to indicate bound as well)
        # instead of simply the model object
        info.ranged_constraints.add(con)

(you could, of course save a redundant comparison to None by putting the check for ranged inside one of the previous (lower / upper is not None) tests. But that might be less clear to readers...)

Author

@ChrisLaliwala ChrisLaliwala Mar 17, 2026


Thanks for the feedback. I have tried to save a redundant comparison to None and added a comment to make it a bit clearer for readers:

            lower, body, upper = con.to_bounded_expression()
            if con.equality:
                equality_cons.append(con)
                info.equality_con_to_expr[con] = upper - body
            else:
                if lower is not None:
                    inequality_cons.append((con, "lb"))
                    info.inequality_con_to_expr.setdefault(con, {})["lb"] = lower - body
                if upper is not None:
                    inequality_cons.append((con, "ub"))
                    info.inequality_con_to_expr.setdefault(con, {})["ub"] = body - upper

                    # lower is not None and upper is not None -> ranged constraint
                    if lower is not None:
                        # we want to keep track of the ranged constraints because the mapping between
                        # multipliers and ranged constraints will be a tuple (to indicate bound as well)
                        # instead of simply the model object
                        info.ranged_constraints.add(con)

f"The KKT multiplier: {multiplier_var.name}, does not exist on {model.name}."
)

def get_multiplier_from_constraint(self, model, constraint=None, variable=None):
Member


I wonder if a different / simpler API might be preferable? I am wondering about something like:

def get_multipliers(self, model, component) -> VarData | tuple[VarData, VarData]:

That is, if component is an equality or (simple) inequality, then return the multiplier. If it is a ranged inequality of a bounded variable, then return a tuple of the multipliers for the lower and upper bounds (for the case of variables, one [or I suppose both] could be None).
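That proposed API could look something like the following sketch. This is a plain-Python mock (the dicts `_eq_mult` and `_ineq_mult` are hypothetical stand-ins for the transformation's internal component mappings), not the implementation in this PR:

```python
# Hypothetical sketch of a simplified get_multipliers API: return a single
# multiplier for equality / one-sided constraints, and an (lb, ub) tuple for
# ranged constraints (or bounded variables, where one entry could be None).

_eq_mult = {"eq_con": "gamma_eq"}
_ineq_mult = {
    "range_con": ("alpha_lb", "alpha_ub"),  # ranged: both bounds present
    "lb_con": ("alpha_lb2", None),          # one-sided: lower bound only
}

def get_multipliers(component):
    if component in _eq_mult:
        return _eq_mult[component]          # equality: single multiplier
    lb, ub = _ineq_mult[component]
    if lb is not None and ub is not None:
        return (lb, ub)                     # ranged: tuple of both
    return lb if lb is not None else ub     # one-sided: just the one

print(get_multipliers("eq_con"))     # gamma_eq
print(get_multipliers("range_con"))  # ('alpha_lb', 'alpha_ub')
print(get_multipliers("lb_con"))     # alpha_lb2
```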

Author


I have simplified the API as per your suggestions.

ChrisLaliwala and others added 8 commits March 17, 2026 10:44
Co-authored-by: John Siirola <jsiirola@users.noreply.github.com>
Co-authored-by: John Siirola <jsiirola@users.noreply.github.com>
Co-authored-by: John Siirola <jsiirola@users.noreply.github.com>
Co-authored-by: John Siirola <jsiirola@users.noreply.github.com>
Co-authored-by: John Siirola <jsiirola@users.noreply.github.com>
@ChrisLaliwala
Author

ChrisLaliwala commented Mar 17, 2026


Thank you for these comments and suggestions. There is definitely a lot of repeated work, and the transformation can be simplified. Now that I've addressed the other issues, I will open a PR to implement this ASAP.

