Fix qiskit-gpu when precision is set #36
Conversation
Thanks, but here you have simply renamed
Yes, because a quick test is the following: it will run on CPU if using the `precision` option.
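A minimal sketch of such a quick check (the original snippet is not reproduced here; the circuit size, the `AerSimulator` options, and the metadata inspection below are illustrative assumptions, assuming qiskit and qiskit-aer with GPU support are installed):

```python
from qiskit import QuantumCircuit, transpile
from qiskit_aer import AerSimulator  # older releases: from qiskit.providers.aer import AerSimulator

# Small layered circuit, just large enough that a CPU fallback is noticeable.
nqubits = 26
circuit = QuantumCircuit(nqubits)
for _ in range(5):
    circuit.h(range(nqubits))
    for i in range(nqubits - 1):
        circuit.cx(i, i + 1)
circuit.measure_all()

# Request the GPU together with an explicit precision. If the precision option
# is misspelled in the script, the setting is not applied and the run can
# silently stay on the CPU.
simulator = AerSimulator(method="statevector", device="GPU", precision="single")
result = simulator.run(transpile(circuit, simulator), shots=1).result()

# The per-experiment metadata usually reports the method and device actually
# used (field names vary between qiskit-aer versions); watching nvidia-smi
# during the run is an alternative check.
print(result.to_dict()["results"][0].get("metadata", {}))
```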
I see, thanks for spotting this issue. I am curious to see its performance.
Here are the fixed plots, with the numbers for other libraries kept the same. Single precision is not very impressive, but I remember we also observed this in qiboteam/qibojit#53 for the multi-qubit operator. Double precision looks as expected, so I guess from our side the scripts are correct and properly running on GPU now. It's nice to see that the CPU/GPU ratios are consistent for all libraries both on 20 qubits (CPU faster) and 30 qubits (GPU faster), especially in double precision. Qibojit GPU is surprisingly fast, which I hope is true and not due to a bug or something!
Thanks!
Well, you know, this is very unlikely. If the simulation numbers are ok, then we are fine, given that we can reproduce this setup on multiple devices.
LGTM, thanks!
Fixes the issue we have with qiskit-gpu, for which performance appears similar to qiskit-cpu in the library comparison bar plots. Apparently our qiskit-gpu falls back to CPU when the precision option is used in the compare.py script, due to a typo in the simulator option definition.
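For reference, a minimal sketch of a correctly configured Aer GPU simulator with an explicit precision (the option names below follow the qiskit-aer API; the actual code in compare.py and the misspelled option fixed by this PR are not reproduced here):

```python
from qiskit_aer import AerSimulator  # older releases: from qiskit.providers.aer import AerSimulator

# Statevector simulation on the GPU in single precision. Both "device" and
# "precision" must be spelled exactly like this; a misspelled option key is not
# applied (depending on the qiskit-aer version it may be rejected or only
# trigger a warning), which can leave the simulation on the CPU/double-precision
# defaults while still being labelled "qiskit-gpu" in the plots.
simulator = AerSimulator(method="statevector", device="GPU", precision="single")

# Inspect the resolved backend options to confirm the settings were applied.
print(simulator.options)
```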