Conversation

@joeyshuttleworth (Collaborator) commented Apr 28, 2023

Description

Add models 1-11 and 30 from CardiacModelling/hergModels/tree/master/models_myokit. These models are tested by comparing Myokit Simulation output from the original .mmt files with that generated through the Markov_builder.

The newly added models make extensive use of the shared_variables_dict. The behaviour of parameterise_rates was modified to make it more convenient to set these variables to numerical values.

These changes exposed some issues with reversibility checking and transition matrix calculation which have now been fixed.

The new models can be easily generated by users and modified to include drug-trapping or additional states/parameters.

Types of changes

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to change)

Testing

  • Testing is done automatically and codecov shows test coverage
  • This cannot be tested automatically

Documentation checklist

  • I have updated all documentation in the code where necessary.
  • I have checked spelling in all (new) comments and documentation.
  • I have added a note to RELEASE.md if relevant (new feature, breaking change, or notable bug fix).

codecov bot commented May 3, 2023

Codecov Report

❌ Patch coverage is 97.91667% with 7 lines in your changes missing coverage. Please review.
✅ Project coverage is 93.30%. Comparing base (5672637) to head (93b454f).
⚠️ Report is 1 commit behind head on master.

Files with missing lines Patch % Lines
markov_builder/MarkovChain.py 92.04% 7 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##           master      #13      +/-   ##
==========================================
+ Coverage   89.71%   93.30%   +3.59%     
==========================================
  Files           5       22      +17     
  Lines         554      837     +283     
==========================================
+ Hits          497      781     +284     
+ Misses         57       56       -1     

@MichaelClerx (Member) left a comment
Hi Joey,
Won't have time for an in-depth review (if you want that) till June or after.
Few small comments. And comparison 2 looks plain wrong?
Would plot an error for these 3 cases too (signal1 - signal2)

@joeyshuttleworth joeyshuttleworth changed the title Fix transition matrix Add thirty models Jun 13, 2023
Joseph G. Shuttleworth added 2 commits December 16, 2025 19:02
@joeyshuttleworth (Collaborator, Author) commented
All 30 models have been added now. Some fixes have been applied to some of the .mmt files where it looks like there were errors.

@mirams (Member) commented Dec 17, 2025

All 30 models have been added now. Some fixes have been applied to some of the .mmt files where it looks like there were errors.

You'll have to show me those Joey! And do a pull request on the 30 models repo?

@@ -1,4 +1,4 @@
[[model]]
model]]
Member:

Typo?

'b2': ('p11 * exp(-p12*V)',),
}

open_state = 'O'
Member:

O * Oh ?


class TestThirtyModels(unittest.TestCase):

def setUp(self):
Member:

Did you mean setUpClass here?

The difference is that setUp gets called before every test_x method in your class, while setUpClass is called just once.

if not os.path.exists(comparison_plot_dir):
os.makedirs(comparison_plot_dir)

for i, model in zip(self.model_indices, self.models):
Member:

If you wanted, you could get fancy here and use subTest https://docs.python.org/3/library/unittest.html#subtests
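A minimal sketch of what the subTest approach could look like (the model names here are hypothetical stand-ins for the generated model classes, not the PR's actual test code):

```python
import unittest

# Hypothetical stand-ins for the generated model classes.
MODELS = ['model_00', 'model_01', 'model_02']

class TestModelsDemo(unittest.TestCase):
    def test_models(self):
        for name in MODELS:
            # Each iteration is reported as its own sub-test, so one
            # failing model doesn't hide failures in the others.
            with self.subTest(model=name):
                self.assertTrue(name.startswith('model_'))

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(TestModelsDemo))
print(result.wasSuccessful())  # -> True
```

The main benefit is that a failure report names the offending model via the subTest keyword arguments, and the loop continues past the first failure.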

Collaborator (Author):

There's also https://docs.pytest.org/en/stable/how-to/parametrize.html which seems to do exactly what we need here.

I'm not sure which way to go on this, but I'd definitely like to tidy these up so we can see which models are failing if there's a problem

Member:

Sure, but keep in mind pytest and unittest are two separate packages

Collaborator (Author):

Yeah. I'm just trying to weigh up if switching to pytest is worth it for this. I think it is, because you can use the parametrize() decorator to register each model as a separate test, then do pytest --lf to only run those models that failed in the last run. Otherwise, running all the tests can take a few minutes.
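A hedged sketch of the parametrize idea (MODELS and test_model_name are hypothetical stand-ins; the real tests would import model_00 ... model_30 from markov_builder):

```python
import pytest

# Hypothetical stand-ins; the real tests would import the generated
# model classes rather than name strings.
MODELS = [f'model_{i:02}' for i in range(31)]

# parametrize registers one test per model, so a failure in model_17 is
# reported individually and `pytest --lf` reruns only the failing models.
@pytest.mark.parametrize('model', MODELS)
def test_model_name(model):
    assert model.startswith('model_')
```

Run under pytest, this shows up as 31 separate test items rather than one monolithic loop, which is what makes `--lf` useful here.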

self.disconnected_models = [model_03, model_09, model_10, model_19,
model_20, model_26, model_27]

self.model_indices = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14,
Member:

list(range(31))?

Member:

Alternatively, some test that you've (probably) listed the models right.
E.g. self.assertEqual(len(set(model_indices)), 31) and self.assertEqual(len(model_indices), len(models))

Collaborator (Author):

I suppose there's no need for the indices list anymore.

Before, I didn't have all 30 models (15-29 or something were missing), so I did this manually.

I think we can drop the indices and use enumerate with list(range(31)) as you suggest.

self.output_dir = test_output_dir
logging.info("outputting to " + test_output_dir)

self.models = [model_00, model_01, model_02, model_03, model_04,
Member:

[f'model_{i:02}' for i in self.model_indices] ?

f"model {model} is not connected")

def test_reversible(self):
for model in self.models:
Member:

Could do something slightly nicer here by using e.g. for model in self.models if ..., or using sets, e.g. for model in set(self.models) - set(self.disconnected_models).
At least, I'd move the if statement above the logging statement.
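For illustration, a small sketch of the filter-up-front idea (the names below are stand-ins for the real model lists, and the loop body is only a placeholder):

```python
# Hypothetical stand-in names; in the real test these lists hold the
# generated model classes (self.models / self.disconnected_models).
models = ['model_00', 'model_01', 'model_02', 'model_03']
disconnected_models = ['model_03']

# Filtering up front keeps the loop body free of skip logic; a list
# comprehension (unlike a set difference) also preserves model order.
connected = [m for m in models if m not in disconnected_models]
for model in connected:
    pass  # e.g. a hypothetical self.assertTrue(model.is_reversible(), ...)

print(connected)  # -> ['model_00', 'model_01', 'model_02']
```

One caveat with the set-based version is that iteration order becomes arbitrary, which can make test logs harder to compare between runs; the comprehension avoids that.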

Comment on lines 142 to 143
if i is None or model is None:
continue
Member:

Confused by this. You'd want the tests to fall over if they weren't configured right

@MichaelClerx MichaelClerx requested review from MichaelClerx and removed request for MichaelClerx December 17, 2025 17:02