As discussed at the OpenMS conference, we should consider adding a trigger on each release of these Nextflow pipelines that agentically checks whether the current implementation produces exactly the output of the Nextflow pipeline. This would reduce the maintenance burden and ensure reproducibility between the Nextflow workflow and its Streamlit representation.
To check whether the output is the same, we could, for example, parse the nf-test snapshot, which records the outputs (file checksums) of the test runs (https://github.com/nf-core/mhcquant/blob/master/tests/default.nf.test.snap), and compare them against the Streamlit app results.
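A minimal sketch of that comparison, assuming the snapshot sits at `tests/default.nf.test.snap` and the Streamlit app writes its results to a local `streamlit_results/` directory (both paths are hypothetical placeholders), and assuming the snapshot records files as `name:md5,<hash>` strings as in the linked mhcquant snapshot:

```python
import hashlib
import re
from pathlib import Path

# Hypothetical paths; adjust to the actual repo layout and app output location.
SNAP_FILE = Path("tests/default.nf.test.snap")
APP_OUTPUT_DIR = Path("streamlit_results")

# nf-test snapshots embed files as "name.ext:md5,<hash>" strings.
MD5_PATTERN = re.compile(r"([\w.\-]+):md5,([0-9a-f]{32})")

def expected_checksums(snap_file: Path) -> dict[str, str]:
    """Extract {filename: md5} pairs from an nf-test snapshot."""
    # Scanning the raw JSON text avoids walking the nested content structure.
    return dict(MD5_PATTERN.findall(snap_file.read_text()))

def actual_checksum(path: Path) -> str:
    """md5 of a file produced by the Streamlit app."""
    return hashlib.md5(path.read_bytes()).hexdigest()

def compare(snap_file: Path, output_dir: Path) -> list[str]:
    """Return a list of mismatches between snapshot and app output."""
    mismatches = []
    for name, expected in expected_checksums(snap_file).items():
        candidate = output_dir / name
        if not candidate.exists():
            mismatches.append(f"{name}: missing from app output")
        elif actual_checksum(candidate) != expected:
            mismatches.append(f"{name}: checksum differs")
    return mismatches

if __name__ == "__main__":
    for problem in compare(SNAP_FILE, APP_OUTPUT_DIR):
        print(problem)
```

A check like this could run in CI on pipeline releases; note that md5 comparison only works for outputs the pipeline produces deterministically, so non-deterministic files (timestamps, logs) would need a content-aware comparison instead.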