IOHexperimenter should offer simple workflows for answering the question: "is my solver better than the state of the art?".
This requires an automated experimental pipeline: run an algorithm configurator to tune the solver's parameters, then estimate its performance on a benchmark (with optional cross-validation).
To keep things as simple as possible, this pipeline should run seamlessly on an HPC cluster, without much intervention from the user.
A good way to achieve this would be to use a modern workflow manager such as Snakemake or Nextflow, or to implement the design of experiments through OpenMOLE.
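As a rough illustration of what such a pipeline could look like with Snakemake, here is a minimal Snakefile sketch. The rule names, file paths, and the `tune_solver.py` / `run_benchmark.py` scripts are hypothetical placeholders, not part of IOHexperimenter:

```python
# Hypothetical two-stage pipeline sketch (all names are placeholders).

rule all:
    input:
        "results/benchmark_scores.csv"

# Stage 1: algorithm configuration, i.e. tune the solver's parameters.
rule tune:
    output:
        "results/best_config.json"
    shell:
        "python tune_solver.py --out {output}"

# Stage 2: estimate the tuned solver's performance on the benchmark.
rule benchmark:
    input:
        "results/best_config.json"
    output:
        "results/benchmark_scores.csv"
    shell:
        "python run_benchmark.py --config {input} --out {output}"
```

Because Snakemake can submit each rule as a separate cluster job (e.g. via its cluster/executor support), the same Snakefile would run both on a laptop and on an HPC cluster with little user intervention, which is exactly the property sought here.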