@arximboldi (Contributor)

Hi @rmartinho!

Here are a couple of new features (sorry for mixing them in the same PR, but German classes are frying my brain and making me even lazier than I usually am ;-). Please refer to the commit messages for details.

Cheers!

The idea is that in some cases we might not want to run a benchmark.
The typical example is benchmarking various algorithms with
increasing problem sizes (using *parameters*).  For high values of `N`,
we might want to skip the algorithms with poor complexity, but still
compare the other algorithms.

This is highlighted in the new `example8`, where various sorting
algorithms are tested.  When running it like this:
```
    bin/examples/example8 -p N:*:1:2:20 -r html
```
the following output is obtained:

    https://sinusoid.es/misc/nonius/skip-example8.html

The implementation includes the following changes:

- Benchmarks continue running after one benchmark throws an exception.
  This means that one can skip the benchmark just by throwing an
  exception.  Also, this means that the system is more resilient now and
  it outputs all other available results even if a benchmark actually
  fails because of an error condition.

- A new kind of exception `skip_error` has been added to explicitly
  signal that we would like to skip this benchmark.  This exception may
  be thrown with `nonius::skip()` for convenience.

- The HTML reporter now takes into account that some benchmark runs may
  be missing results and generates sensible output in that case.
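The catch-and-continue mechanism boils down to wrapping each benchmark run in a try/catch. Here is a minimal self-contained sketch of the idea (not the actual nonius runner; `skip_error`, `skip()`, and `result` are simplified stand-ins for the real types):

```cpp
#include <cassert>
#include <functional>
#include <stdexcept>
#include <string>
#include <utility>
#include <vector>

// Stand-in for nonius::skip_error: signals "skip this benchmark".
struct skip_error : std::runtime_error {
    skip_error() : std::runtime_error("benchmark skipped") {}
};

// Stand-in for nonius::skip().
[[noreturn]] void skip() { throw skip_error{}; }

struct result {
    std::string name;
    bool skipped;  // true when the benchmark threw (skip or real failure)
};

// Run every benchmark; a throwing benchmark no longer aborts the run,
// so the results of all other benchmarks are still reported.
std::vector<result> run_all(
    const std::vector<std::pair<std::string,
                                std::function<void()>>>& benchmarks) {
    std::vector<result> results;
    for (auto& b : benchmarks) {
        try {
            b.second();
            results.push_back({b.first, false});
        } catch (...) {  // skip_error or any error condition
            results.push_back({b.first, true});
        }
    }
    return results;
}
```

In a real nonius benchmark one would call `nonius::skip()` when, say, the problem size crosses a threshold for a poor-complexity algorithm, and the runner would still record all remaining benchmarks.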

Benchmarks are run in the order they are written in the file.  If the
reporter preserves that order, the benchmark author keeps control over
how the results are best presented.

However, this was not the case, because benchmark results were stored
in an `std::unordered_map`, which iterates in an unspecified order.  By
storing them in a vector that is populated as the benchmarks run, we do
preserve the run order.
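The difference is easy to demonstrate outside nonius: a vector of (name, value) pairs keeps insertion order, while an `std::unordered_map` does not. An illustrative sketch (the names and the `double` result value are placeholders, not the reporter's actual data structures):

```cpp
#include <cassert>
#include <string>
#include <utility>
#include <vector>

// Results stored in run order: iterating the vector yields benchmarks
// exactly as they were declared in the source file, which an
// std::unordered_map keyed by name would not guarantee.
using run_results = std::vector<std::pair<std::string, double>>;

run_results record_in_run_order() {
    run_results results;
    // Results are pushed back as each benchmark finishes,
    // so iteration order == run order.
    results.emplace_back("insertion sort", 4.2);
    results.emplace_back("quick sort",     1.3);
    results.emplace_back("std::sort",      1.1);
    return results;
}
```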

In some cases, especially in the bar graph summary view when many
benchmarks are present, it can be useful to sort benchmarks based on
their results.  This commit adds a checkbox labeled `sorted` to the top
bar --where the plot chooser is-- that enables sorting the benchmarks by
their measured results.
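The sorting itself is just an ordering of the results by measured value (in nonius it happens in the HTML reporter's JavaScript; this is an illustrative C++ sketch of the same idea, reusing a placeholder results type):

```cpp
#include <algorithm>
#include <cassert>
#include <string>
#include <utility>
#include <vector>

// Placeholder results type: (benchmark name, measured value) pairs.
using run_results = std::vector<std::pair<std::string, double>>;

// Order benchmarks by their measured value, fastest first, as the
// `sorted` checkbox does in the bar graph summary view.
run_results sorted_by_result(run_results results) {
    std::sort(results.begin(), results.end(),
              [](const auto& a, const auto& b) {
                  return a.second < b.second;
              });
    return results;
}
```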

Note that sorting the benchmarks changes the colors and checkmarks of
the traces.  It might be better to preserve the initial ones, but I was
too lazy to do so--I believe we would need to manually assign colors and
markers instead of relying on Plotly auto-assigning them.

Here is an example of the results:

    https://sinusoid.es/misc/nonius/sorting-example8.html
arximboldi added a commit to arximboldi/immer that referenced this pull request Oct 26, 2016