Add thirty models #13
Conversation
Codecov Report

@@           Coverage Diff           @@
##           master      #13    +/-  ##
=========================================
+ Coverage   89.71%   93.30%   +3.59%
=========================================
  Files           5       22      +17
  Lines         554      837     +283
=========================================
+ Hits          497      781     +284
+ Misses         57       56       -1
Add sundials install to workflow
MichaelClerx left a comment
Hi Joey,
Won't have time for an in-depth review (if you want that) till June or after.
A few small comments. And comparison 2 looks plain wrong?
I would plot an error for these 3 cases too (signal1 - signal2).
All 30 models have been added now. Some fixes have been applied to some of the .mmt files where it looks like there were errors.
You'll have to show me those, Joey! And do a pull request on the 30 models repo?
tests/models_myokit/model-30.mmt
Outdated
@@ -1,4 +1,4 @@
-[[model]]
+model]]
Typo?
    'b2': ('p11 * exp(-p12*V)',),
}

open_state = 'O'
O * Oh ?
tests/test_30_models.py
Outdated
class TestThirtyModels(unittest.TestCase):

    def setUp(self):
Did you mean setUpClass here?
The difference is that setUp gets called before every test_x method in your class, while setUpClass is called just once
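To illustrate the difference, here is a minimal sketch (the class and attribute names are hypothetical, not from the PR):

```python
import unittest

class TestExample(unittest.TestCase):
    """Hypothetical test case illustrating setUp vs setUpClass."""

    @classmethod
    def setUpClass(cls):
        # Called once, before any test in this class runs:
        # good for expensive shared setup such as building models
        cls.shared = ['built once']

    def setUp(self):
        # Called before every individual test_* method
        self.counter = 0

    def test_one(self):
        self.assertEqual(self.counter, 0)
        self.assertEqual(self.shared, ['built once'])

    def test_two(self):
        # setUp gave this test a fresh counter; setUpClass did not rerun
        self.assertEqual(self.counter, 0)
        self.assertEqual(self.shared, ['built once'])
```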
tests/test_30_models.py
Outdated
if not os.path.exists(comparison_plot_dir):
    os.makedirs(comparison_plot_dir)

for i, model in zip(self.model_indices, self.models):
If you wanted, you could get fancy here and use subTest https://docs.python.org/3/library/unittest.html#subtests
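A minimal sketch of what that could look like, with a hypothetical stand-in for the real model list:

```python
import unittest

class TestModelsWithSubTest(unittest.TestCase):
    # Hypothetical stand-in for the real model list
    models = ['model_00', 'model_01', 'model_02']

    def test_all_models(self):
        for name in self.models:
            # Each subTest failure is reported separately, so one bad
            # model does not mask failures in the remaining models
            with self.subTest(model=name):
                self.assertTrue(name.startswith('model_'))
```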
There's also https://docs.pytest.org/en/stable/how-to/parametrize.html which seems to do exactly what we need here.
I'm not sure which way to go on this, but I'd definitely like to tidy these up so we can see which models are failing if there's a problem
Sure, but keep in mind pytest and unittest are two separate packages
Yeah. I'm just trying to weigh up if switching to pytest is worth it for this. I think it is, because you can use the parametrize() decorator to register each model as a separate test, then do pytest --lf to only run those models that failed in the last run. Otherwise, running all the tests can take a few minutes.
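A sketch of the parametrize approach, with hypothetical model names standing in for the real constructors:

```python
import pytest

# Hypothetical stand-ins for the real model constructors
MODEL_NAMES = [f'model_{i:02}' for i in range(3)]

# parametrize registers each model as its own test item, so
# `pytest --lf` re-runs only the models that failed last time
@pytest.mark.parametrize('model_name', MODEL_NAMES)
def test_model_name_is_valid(model_name):
    assert model_name.startswith('model_')
```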
tests/test_30_models.py
Outdated
self.disconnected_models = [model_03, model_09, model_10, model_19,
                            model_20, model_26, model_27]

self.model_indices = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14,
list(range(31))?
Alternatively, some test that you've (probably) listed the models right.
E.g. self.assertEqual(len(set(model_indices)), 31) and self.assertEqual(len(model_indices), len(models))
I suppose there's no need for the indices list anymore.
Before I had all 30 models (15-29 or something were missing), I did this manually.
I think we can drop indices and use enumerate with list(range(31)) as you suggest
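A sketch of the enumerate-based version, with a hypothetical model list standing in for model_00 .. model_30:

```python
# Hypothetical model list standing in for model_00 .. model_30
models = [f'model_{i:02}' for i in range(31)]

# Sanity checks along the lines suggested: 31 distinct models,
# and the index range matches the model list length
assert len(set(models)) == 31
assert len(models) == len(list(range(31)))

# enumerate() yields the index alongside each model, so no
# hand-maintained model_indices list is needed
for i, model in enumerate(models):
    assert model == f'model_{i:02}'
```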
tests/test_30_models.py
Outdated
self.output_dir = test_output_dir
logging.info("outputting to " + test_output_dir)

self.models = [model_00, model_01, model_02, model_03, model_04,
[f'model_{i:02}' for i in self.model_indices] ?
tests/test_30_models.py
Outdated
                f"model {model} is not connected")

def test_reversible(self):
    for model in self.models:
Could do something slightly nicer here by using e.g. for model in self.models if ..., or using sets, e.g. for model in set(self.models) - set(self.disconnected_models)
At least, I'd move the if statement above the logging statement
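A minimal sketch of the set-based filtering, with hypothetical stand-ins for the real model lists:

```python
# Hypothetical stand-ins for the real model lists
models = ['model_00', 'model_03', 'model_09', 'model_12']
disconnected_models = ['model_03', 'model_09']

# Set difference filters out the disconnected models up front,
# instead of an `if ... continue` inside the loop body
connected = sorted(set(models) - set(disconnected_models))
for model in connected:
    # the reversibility check would run here
    pass
```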
tests/test_30_models.py
Outdated
if i is None or model is None:
    continue
Confused by this. You'd want the tests to fall over if they weren't configured right
Description
Add models 1-11 and 30 from CardiacModelling/hergModels/tree/master/models_myokit. These models are tested by comparing Myokit simulation output from the original .mmt files with that generated through the Markov_builder.
The newly added models make extensive use of the shared_variables_dict. The behaviour of parameterise_rates was modified to make it more convenient to set these variables to numerical values.
These changes exposed some issues with reversibility checking and transition matrix calculation which have now been fixed.
The new models can be easily generated by users and modified to include drug-trapping or additional states/parameters.
Types of changes
Testing
Documentation checklist