Conversation
Referenced code (from the diff):

```rust
// Bit i of the input/output value represents qubit i.

#[test]
fn single_qubit_gate_truth_tables() {
```
Is this whole file largely a clone of full_state_noiseless.rs? (Same for gpu_full_state_noisy.rs being largely a copy/paste of full_state_noisy.rs)? Seems like a bunch of duplication to keep in sync. I had expected the idea was to "write once, run any simulator" if the tests are largely the same - is that not practical?
Yes, this is mostly a clone, but only syntactically: the macros expand to different boilerplate code for each simulator. The idea was to design a DSL that makes writing and reading many tests for any simulator easy; it was not to write the tests once and run them on multiple simulators. There are a few reasons why we can't write the tests once and run them on any simulator:
- First, the simulators have different capabilities, so we can't write the tests the same way for all of them; we can't even write the same set of tests. For example, there are no rotation tests for the Clifford simulator, and a couple of PRs ago the CPU and GPU full-state simulators did not implement the same gates; we want to keep this flexibility. The internal state of each simulator is also different, so under the hood the tests really are done differently; we even skip the tests for the GPU simulator when no GPU is available. The design of the three test macros abstracts these differences and this complexity, which is why the files look mostly like a copy-paste.
- The other reason is ease of debugging. When we find ourselves debugging one of the simulators, it is nice to see a contained failure in one test for that simulator, for which we can just hit the `test` Code Lens, or even modify the test while we debug without affecting the other simulators.
- Finally, for the tests that have a probabilistic output, the output won't be the same for all simulators.
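To make the "macros expand to different boilerplate per simulator" idea concrete, here is a hypothetical sketch (none of these types or macro names come from the actual test suite): each macro arm wraps the same DSL-level check in simulator-specific setup, so every simulator keeps an independent, debuggable test function.

```rust
// Toy simulators with deliberately different internal state.
struct FullStateSim { amps: Vec<f64> }   // state-vector style
struct CliffordSim { phases: Vec<bool> } // stabilizer-style bookkeeping

impl FullStateSim {
    fn new() -> Self { FullStateSim { amps: vec![1.0, 0.0] } }
    fn x(&mut self) { self.amps.swap(0, 1); }
    fn is_one(&self) -> bool { self.amps[1] == 1.0 }
}
impl CliffordSim {
    fn new() -> Self { CliffordSim { phases: vec![false] } }
    fn x(&mut self) { self.phases[0] = !self.phases[0]; }
    fn is_one(&self) -> bool { self.phases[0] }
}

// Each arm emits different simulator-specific boilerplate around the
// same logical check.
macro_rules! sim_test {
    (full_state, $name:ident) => {
        fn $name() -> bool {
            let mut sim = FullStateSim::new(); // full-state setup
            sim.x();
            sim.is_one()
        }
    };
    (clifford, $name:ident) => {
        fn $name() -> bool {
            let mut sim = CliffordSim::new(); // Clifford setup
            sim.x();
            sim.is_one()
        }
    };
}

sim_test!(full_state, full_state_x);
sim_test!(clifford, clifford_x);

fn main() {
    // Each expansion is a separate function, debuggable in isolation.
    assert!(full_state_x());
    assert!(clifford_x());
    println!("ok");
}
```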
I extended the syntax of the testing macros to allow running the same test on multiple simulators, and made the current tests common to all simulators. I also adjusted the number of shots in the probabilistic tests so that they are the same for all simulators. We lost the benefit of the second point I listed above, but we gained a direct way to verify that all the simulators behave the same way.
This PR extends the work in #2905 by adding tests for the GPU simulator.
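The "same test, multiple simulators" extension could look roughly like the following hypothetical sketch (the trait, types, and macro name are illustrative, not from the actual code): one test body is expanded into a module per listed simulator.

```rust
// Minimal simulator interface shared by the toy backends below.
trait Sim {
    fn new() -> Self;
    fn x(&mut self, q: usize);
    fn read(&self, q: usize) -> bool;
}

struct CpuSim(Vec<bool>);
struct GpuSim(Vec<bool>);

impl Sim for CpuSim {
    fn new() -> Self { CpuSim(vec![false; 2]) }
    fn x(&mut self, q: usize) { self.0[q] = !self.0[q]; }
    fn read(&self, q: usize) -> bool { self.0[q] }
}
impl Sim for GpuSim {
    fn new() -> Self { GpuSim(vec![false; 2]) }
    fn x(&mut self, q: usize) { self.0[q] = !self.0[q]; }
    fn read(&self, q: usize) -> bool { self.0[q] }
}

// Extended syntax: list several simulators; the macro emits one
// test function per simulator from a single shared body.
macro_rules! shared_test {
    ($name:ident, [$($mod_name:ident => $sim:ty),+]) => {
        $(
            mod $mod_name {
                use super::*;
                pub fn $name() -> bool {
                    let mut s = <$sim>::new();
                    s.x(0);
                    s.read(0) && !s.read(1) // X flips qubit 0 only
                }
            }
        )+
    };
}

shared_test!(x_truth_table, [cpu => CpuSim, gpu => GpuSim]);

fn main() {
    // The identical test runs on every simulator, directly verifying
    // that they agree.
    assert!(cpu::x_truth_table());
    assert!(gpu::x_truth_table());
    println!("ok");
}
```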