Support running Qiskit jobs with multiple circuits #138

Closed
l45k opened this issue Jun 14, 2021 · 5 comments · Fixed by #156

l45k commented Jun 14, 2021

Qiskit supports submitting multiple circuits combined into one job. All circuits in the job are computed as a batch, which reduces the waiting time in the queue. This can be used for computing gradients as well as for evaluating a batch of inputs to a circuit, for example when using a KerasLayer.
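
For reference, batched submission on the Qiskit side looks roughly like this (a minimal sketch assuming the Qiskit API available at the time of this issue, with a local Aer simulator; the circuits themselves are illustrative only):

```python
from qiskit import QuantumCircuit, Aer, execute

backend = Aer.get_backend("qasm_simulator")

circuits = []
for theta in (0.1, 0.2, 0.3):
    qc = QuantumCircuit(1, 1)
    qc.ry(theta, 0)
    qc.measure(0, 0)
    circuits.append(qc)

# Passing a list of circuits produces a single job; the backend processes
# the whole batch at once instead of queueing each circuit separately.
job = execute(circuits, backend, shots=1000)
result = job.result()
counts = [result.get_counts(i) for i in range(len(circuits))]
```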

I would like to call the QNode with a list of inputs that results in a single job sent to the Qiskit device. The same could be done for computing gradients. In my opinion, this would also require changing the behavior of the KerasLayer to provide a batch computation method.

trbromley (Contributor) commented:

Thanks @l45k! Improved batching support is definitely something we're looking at for PennyLane.

Currently, we have the batch_execute() method added to QubitDevice. Devices that inherit from it can override this method to take advantage of their own batching capabilities, the Braket device being one example. At the moment, batch_execute() is only useful when calculating gradients, but we're working on extending that to batching more generally.

For the PennyLane-Qiskit plugin, batch_execute() is not currently overridden, so it simply executes circuits sequentially. The first step toward extending batch support to this plugin would be to write a batch_execute() method that takes advantage of Qiskit's batching functionality. Contributions are welcome!
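
As a very rough illustration of the shape such an override could take (this is only a sketch, not the plugin's actual code: tape_to_qiskit_circuit and counts_to_results are hypothetical helpers standing in for the conversion logic the plugin already has internally, and the attribute names follow common Qiskit/PennyLane conventions rather than the real class):

```python
# Hypothetical sketch of overriding batch_execute in a Qiskit-backed device.
class QiskitDevice(QubitDevice):
    def batch_execute(self, circuits):
        # Convert every PennyLane tape to a Qiskit circuit up front.
        qiskit_circuits = [tape_to_qiskit_circuit(tape) for tape in circuits]

        # Submit the whole list as a single Qiskit job, so the circuits are
        # queued and executed as one batch on the backend.
        job = self.backend.run(qiskit_circuits, shots=self.shots)
        result = job.result()

        # Map each circuit's counts back to the result format PennyLane expects.
        return [
            counts_to_results(result.get_counts(i), tape)
            for i, tape in enumerate(circuits)
        ]
```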


l45k commented Jun 23, 2021

Hey @trbromley,
Thanks for your answer! Unfortunately, I have already looked through the code myself and could not figure out a way to implement it efficiently. batch_execute() would be perfect for solving the issue. However, as you already mentioned, it is only implemented (in some devices) for computing gradients. Therefore, more changes in the PennyLane codebase would be required to make batch_execute() available for general computations. From what I have seen, the clean way would be to add something like a new BatchQubitDevice, since changing QubitDevice itself would require many changes.
What are your thoughts on this?

I will give implementing batch_execute() in PennyLane-Qiskit a try and will keep you updated.

co9olguy (Member) commented:

Hi @l45k, as @trbromley mentioned, we are planning to extend batch_execute so that it is used more widely in devices (and becomes the default). This should land in an upcoming version of PennyLane soon.

In the meantime, the method batch_execute already exists in QubitDevice, which most plugin devices inherit from. That means one could already call it directly with no need to modify PennyLane code (we haven't added a user-facing UI to make this automatic, but the function is already there 🙂)
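
For example, with the tape-based interface available around that time, calling it directly could look something like this (a sketch only; tape construction details differ between PennyLane versions, and default.qubit is used purely for illustration):

```python
import pennylane as qml

dev = qml.device("default.qubit", wires=1, shots=1000)

# Build a few tapes "by hand" (PennyLane ~v0.16-era tape API).
tapes = []
for theta in (0.1, 0.2, 0.3):
    with qml.tape.QuantumTape() as tape:
        qml.RY(theta, wires=0)
        qml.expval(qml.PauliZ(0))
    tapes.append(tape)

# QubitDevice.batch_execute runs the tapes one after another by default;
# a device that overrides it can submit them as a single batched job instead.
results = dev.batch_execute(tapes)
```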

Currently, batch_execute just executes all received circuits sequentially. If you have something more powerful in mind, e.g., submitting multiple circuits via the Qiskit plugin as a single job, one would also have to override batch_execute in the QiskitDevice to do this.

antalszava (Contributor) commented:

Hi @l45k, we have this issue on our radar. Just checking in to see whether you have any further insights here?

antalszava (Contributor) commented:

Hi @l45k, with #156 merged, this feature should now be available on the master branch of the repository and will be included in the next release.
