
Conversation

@smondal13 (Contributor) commented Dec 16, 2025

Fixes

Summary/Motivation:

Under the old definition of A-optimality (which we will call pseudo A-optimality from now on), DoE computed the trace of the Fisher Information Matrix (FIM), as a few studies used that definition. However, that is not the standard definition: more recent literature defines A-optimality as the trace of the inverse of the FIM. This PR adopts the new definition of A-optimality in DoE.
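
For reference, a minimal NumPy sketch of the two criteria (the matrix values are purely illustrative, and this is not the Pyomo.DoE implementation):

import numpy as np

# Illustrative symmetric positive-definite FIM
fim = np.array([[4.0, 1.0],
                [1.0, 3.0]])

pseudo_A = np.trace(fim)                # old definition: trace(FIM)
A_value = np.trace(np.linalg.inv(fim))  # corrected definition: trace(inv(FIM))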

Changes proposed in this PR:

  • Rename the old A-optimality to pseudo A-optimality
  • Implement the new definition of A-optimality (trace of the inverse of the FIM)

Legal Acknowledgement

By contributing to this software project, I have read the contribution guide and agree to the following terms and conditions for my contribution:

  1. I agree my contributions are submitted under the BSD license.
  2. I represent I am authorized to make the contributions and grant the license. If my employer has rights to intellectual property that includes these contributions, I represent that I have received permission to make contributions and grant the required license on behalf of that employer.

@smondal13 (Contributor, Author) commented:

@adowling2 @blnicho This PR is ready for initial review.

smondal13 and others added 2 commits December 18, 2025 15:12
Replace `model` with `m` in `cholesky_LLinv_imp()`

Co-authored-by: Bethany Nicholson <blnicho@users.noreply.github.com>
@blnicho (Member) left a comment:

@smondal13 thanks for addressing my initial review comments. I think these changes look reasonable so far, but I would like to see tests for the corrected A-optimality before I approve the PR.

blnicho self-requested a review December 18, 2025 21:24
codecov bot commented Jan 12, 2026

Codecov Report

❌ Patch coverage is 66.21622% with 75 lines in your changes missing coverage. Please review.
✅ Project coverage is 89.46%. Comparing base (168e961) to head (a362b30).
⚠️ Report is 21 commits behind head on main.

Files with missing lines Patch % Lines
pyomo/contrib/doe/doe.py 68.54% 39 Missing ⚠️
...rib/parmest/examples/rooney_biegler/doe_example.py 44.82% 32 Missing ⚠️
...les/rooney_biegler/parameter_estimation_example.py 60.00% 4 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #3803      +/-   ##
==========================================
+ Coverage   89.41%   89.46%   +0.04%     
==========================================
  Files         909      907       -2     
  Lines      105579   105621      +42     
==========================================
+ Hits        94408    94493      +85     
+ Misses      11171    11128      -43     
Flag Coverage Δ
builders 29.11% <17.56%> (-0.01%) ⬇️
default 86.08% <66.21%> (?)
expensive 35.72% <18.01%> (?)
linux 86.77% <66.21%> (-2.42%) ⬇️
linux_other 86.77% <66.21%> (+0.03%) ⬆️
osx 82.95% <66.21%> (+0.07%) ⬆️
win 85.04% <66.21%> (+0.05%) ⬆️
win_other 85.04% <66.21%> (+0.05%) ⬆️

Flags with carried forward coverage won't be shown.

smondal13 moved this from Ready for design review to Ready for final review in ParmEst & Pyomo.DoE Development, Jan 15, 2026
@adowling2 (Member) left a comment:

I have started leaving review comments and will add more soon.



class ObjectiveLib(Enum):
    determinant = "determinant"
Reviewer (Member) commented:

Let's add the following comments after each line:

# det(FIM), D-optimality
# trace(inv(FIM)), A-optimality
# trace(FIM), "pseudo" A-optimality
# min(eig(FIM)), E-optimality
# cond(FIM), ME-optimality
# constant zero, useful for initialization and debugging
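
For illustration, the enum might then read roughly as follows (only determinant is taken from the snippet above; the other member names are placeholders, not the actual Pyomo.DoE names):

from enum import Enum

class ObjectiveLib(Enum):
    determinant = "determinant"                  # det(FIM), D-optimality
    a_optimality = "a_optimality"                # trace(inv(FIM)), A-optimality
    pseudo_a_optimality = "pseudo_a_optimality"  # trace(FIM), "pseudo" A-optimality
    e_optimality = "e_optimality"                # min(eig(FIM)), E-optimality
    me_optimality = "me_optimality"              # cond(FIM), ME-optimality
    zero = "zero"                                # constant zero, useful for initialization and debugging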

@smondal13 (Author) replied:

Added comments

@adowling2 (Member) left a comment:

From a computational perspective, pseudo-A-optimality, i.e., trace(FIM), is the easiest objective to solve. I suggest we do the majority of the testing of permutations of DoE options with pseudo-A-optimality, and then run a smaller number of tests with D-optimality (both Cholesky and GreyBox) and A-optimality (both Cholesky and GreyBox).

blnicho moved this from Todo to Review In Progress in Pyomo 6.10, Jan 21, 2026
Comment on lines 51 to 68
rooney_biegler_doe = DesignOfExperiments(
    experiment=rooney_biegler_experiment,
    objective_option="determinant",
    tee=tee,
    prior_FIM=FIM,
    improve_cholesky_roundoff_error=True,
)

if optimize_experiment_D:
    # D-Optimality
    rooney_biegler_doe_D = DesignOfExperiments(
        experiment=rooney_biegler_experiment,
        objective_option="determinant",
        tee=tee,
        prior_FIM=FIM,
        improve_cholesky_roundoff_error=True,
    )
    rooney_biegler_doe_D.run_doe()
Reviewer (Member) commented:

Why do you need both rooney_biegler_doe and rooney_biegler_doe_D? Seems like they are identical instances of the DesignOfExperiments class. Why not just call run_doe directly on rooney_biegler_doe when optimize_experiment_D is True?
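
Something along these lines, sketched with the names from the example above:

rooney_biegler_doe = DesignOfExperiments(
    experiment=rooney_biegler_experiment,
    objective_option="determinant",
    tee=tee,
    prior_FIM=FIM,
    improve_cholesky_roundoff_error=True,
)

if optimize_experiment_D:
    # D-optimality: reuse the instance above instead of building a duplicate
    rooney_biegler_doe.run_doe()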

@smondal13 (Author) replied:

That was repeated. It's corrected now.

Comment on lines 110 to 111
opt_D = results_container["optimization"]["D"]
opt_A = results_container["optimization"]["A"]
Reviewer (Member) commented:

This will fail if either optimize_experiment_A or optimize_experiment_D is set to False. You should either guard this code against the different options sent to the run_rooney_biegler_doe function or remove the options and have this example always run the DoE D/A optimization.
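
For example, a guard along these lines (a sketch using the names from the example):

if optimize_experiment_D and optimize_experiment_A:
    opt_D = results_container["optimization"]["D"]
    opt_A = results_container["optimization"]["A"]
    # ... plotting/comparison code that needs both results ...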

@smondal13 (Author) replied:

I have removed the plotting function; it was only there to visually confirm the result and is not required.

Comment on lines 574 to 579
A_opt_value = run_rooney_biegler_doe(optimize_experiment_A=True)[
    "optimization"
]["A"]["value"]
A_opt_design_value = run_rooney_biegler_doe(optimize_experiment_A=True)[
    "optimization"
]["A"]["design"][0]
Reviewer (Member) commented:

Calling run_rooney_biegler_doe twice here seems inefficient. Why not save the return value in a local variable? Then you can extract any of the values in the saved results
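
That is, roughly:

results = run_rooney_biegler_doe(optimize_experiment_A=True)
A_opt_value = results["optimization"]["A"]["value"]
A_opt_design_value = results["optimization"]["A"]["design"][0]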

@smondal13 (Author) replied:

Yes, it was inefficient. I have saved the return value in a variable and reused it.

Comment on lines 589 to 590
@unittest.skipUnless(pandas_available, "test requires pandas")
@unittest.skipUnless(matplotlib_available, "test requires matplotlib")
Reviewer (Member) commented:

The run_rooney_biegler_doe function has an option to plot or not. So why not have this test change the value of plot_optimal_design depending on the value of matplotlib_available? Then you can still test most of the example even when matplotlib isn't available.
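
For example (a sketch, assuming plot_optimal_design is the keyword argument of run_rooney_biegler_doe that controls plotting):

# Run the example either way; only plot when matplotlib can actually be imported
results = run_rooney_biegler_doe(plot_optimal_design=matplotlib_available)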

@smondal13 (Author) replied:

Yes, I have done that.

Comment on lines 598 to 599
# Switch backend to 'Agg' to prevent GUI windows from popping up
plt.switch_backend('Agg')
Reviewer (Member) commented:

Is this still needed? Didn't you already set the backend to 'Agg' up on L28?

@smondal13 (Author) replied:

Not needed; I have removed it.

fd_method = "central"
obj_used = "trace"

experiment = run_rooney_biegler_doe()["experiment"]
Reviewer (Member) commented:

Could you rework this to use FullReactorExperiment like the other tests in this file?

@smondal13 (Author) replied Jan 21, 2026:

Since we talked about using inexpensive tests during the IDAES summit in Pittsburgh, I have added the simpler rooney_biegler example here for this test instead of FullReactorExperiment, which is more complex and sometimes fails to converge. Eventually, we want to make all Pyomo.DoE tests inexpensive (or comparatively inexpensive). However, that does not mean every test that uses FullReactorExperiment is expensive. I can add a test using FullReactorExperiment; let me know whether I should.

params = list(model.parameter_names)

# Check cov_trace initialization
cov_trace_from_fim_inv = sum(pyo.value(model.fim_inv[j, j]) for j in params)
Reviewer (Member) commented:

The expected value in a test should be a hard-coded number, not a repeat of the implementation.
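
For example (a sketch inside the test method; the 1.2345 below is a placeholder, not the actual expected trace):

expected_cov_trace = 1.2345  # placeholder: replace with the pre-computed expected value
cov_trace_from_fim_inv = sum(pyo.value(model.fim_inv[j, j]) for j in params)
self.assertAlmostEqual(cov_trace_from_fim_inv, expected_cov_trace, places=4)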

@smondal13 (Author) replied:

A hard-coded value has been added.

Comment on lines 610 to 614
val = sum(
    pyo.value(model.L[c, params[k]])
    * pyo.value(model.L_inv[params[k], d])
    for k in range(len(params))
)
Reviewer (Member) commented:

Suggested change: replace

    val = sum(
        pyo.value(model.L[c, params[k]])
        * pyo.value(model.L_inv[params[k], d])
        for k in range(len(params))
    )

with

    val = pyo.value(
        sum(
            model.L[c, params[k]] * model.L_inv[params[k], d]
            for k in range(len(params))
        )
    )

I'm pretty sure you can simplify this slightly by calling pyo.value just once. (Note: if you accept this change through the GitHub GUI, the line indentation will be off.)

@smondal13 (Author) replied:

Added manually.

@mrmundt (Contributor) commented Jan 21, 2026:

@smondal13 - Gurobi just released a new version that supports Python 3.14, and we are seeing failures for our gurobi tests because of it. I cancelled your jobs because I have a fix already, just need to get it in.

TL;DR - if you see 3.14 failures in gurobi, not your fault, we're on it.
