
ro_optimization in autocalibration #392

Merged: 41 commits from auto_ro_optimization into main on Jul 20, 2023
Conversation

@Edoardo-Pedicillo (Contributor) commented Jun 8, 2023

This PR implements the readout (RO) frequency optimization and a new cumulative function using numba.
Checklist:

  • Reviewers confirm new code works as expected.
  • Tests are passing.
  • Coverage does not decrease.
  • Documentation is updated.
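
As context for the numba-based cumulative function mentioned above, here is a minimal sketch of what such a function could look like. This is an illustration only: the signature `cumulative(input_data, points)` is inferred from the review snippet later in this thread, and the PR's actual implementation may differ.

```python
import numpy as np
from numba import njit


@njit
def cumulative(input_data, points):
    """For each value in input_data, count how many entries of
    `points` are less than or equal to it (an unnormalized
    empirical CDF evaluated at the input values)."""
    points_sorted = np.sort(points)
    counts = np.empty(len(input_data), dtype=np.int64)
    for i in range(len(input_data)):
        counts[i] = np.searchsorted(points_sorted, input_data[i], side="right")
    return counts
```

For example, `cumulative(np.array([0.0, 0.5, 1.0]), np.array([0.2, 0.4, 0.6]))` returns `[0, 2, 3]`. With `@njit` the loop compiles to native code, avoiding the Python-level overhead of a plain counting loop.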

@Jacfomg (Contributor) commented Jul 5, 2023

Can we try to get this working, or, if you have to work on something else, can I take it?

@andrea-pasquale (Contributor) commented:

We might also need this to fix #337, given that when optimizing the frequency we can clearly see in which direction the clouds are rotating.

@codecov (bot) commented Jul 8, 2023

Codecov Report

Merging #392 (48e2476) into main (4c54480) will increase coverage by 0.11%.
The diff coverage is 100.00%.

Additional details and impacted files:

@@            Coverage Diff             @@
##             main     #392      +/-   ##
==========================================
+ Coverage   97.03%   97.15%   +0.11%     
==========================================
  Files          48       49       +1     
  Lines        3107     3233     +126     
==========================================
+ Hits         3015     3141     +126     
  Misses         92       92              
Flag        Coverage Δ
unittests   97.15% <100.00%> (+0.11%) ⬆️

Flags with carried forward coverage won't be shown.

Impacted Files                                          Coverage Δ
src/qibocal/protocols/characterization/__init__.py      100.00% <100.00%> (ø)
...bocal/protocols/characterization/classification.py   100.00% <100.00%> (ø)
...zation/readout_optimization/resonator_frequency.py   100.00% <100.00%> (ø)
src/qibocal/protocols/characterization/utils.py         94.82% <100.00%> (+0.54%) ⬆️

@Edoardo-Pedicillo (Contributor, Author) commented:

Sorry for the number of commits, but I had some trouble with the stash.
The PR is now ready. I tested it on hardware with only one qubit; when I tried with multiple qubits, I lost the connection with the instruments and I am unable to reconnect to iqm5q for the time being. These days I have to attend a school, so @andrea-pasquale @Jacfomg, could you please test the routine with multiple qubits?

@andrea-pasquale (Contributor) left a comment:

Thanks @Edoardo-Pedicillo.
I still need to test the routine on hardware; generally it looks good to me.
Please find some suggestions below.

@Jacfomg (Contributor) commented Jul 10, 2023

> Sorry for the number of commits, but I had some trouble with the stash. The PR is now ready. I tested it on hardware with only one qubit; when I tried with multiple qubits, I lost the connection with the instruments and I am unable to reconnect to iqm5q for the time being. These days I have to attend a school, so @andrea-pasquale @Jacfomg, could you please test the routine with multiple qubits?

I was testing it, and the cumulative takes the times we measured; now all the time the routine takes is in the execution. As there is usually short-time-scale noise going on, visible as weird blobs that decrease fidelity, I suppose we should set a high default value for nshots if we want to avoid doing software averages. Also, whether just using a high number of shots would help could be checked by running calibrate qubit states with many points, using the new cumulative function.

@Edoardo-Pedicillo (Contributor, Author) commented:

> I suppose we should set a high default value for nshots if we want to avoid doing software averages.

Yes, indeed: I have not implemented the software average for this reason. Maybe I can add this as a suggestion somewhere.

@andrea-pasquale (Contributor) left a comment:

Thanks @Edoardo-Pedicillo.
I tested it on hardware and it seems to work as expected.
Just a few minor comments, then we can merge.
Could you also have a look at the coverage?

Comment on lines +162 to +200
iq_state0 = data[qubit, 0][data[qubit, 0].freq == freq][["i", "q"]]
iq_state1 = data[qubit, 1][data[qubit, 1].freq == freq][["i", "q"]]
iq_state0 = iq_state0.i + 1.0j * iq_state0.q
iq_state1 = iq_state1.i + 1.0j * iq_state1.q

iq_state1 = np.array(iq_state1)
iq_state0 = np.array(iq_state0)
nshots = len(iq_state0)

iq_mean_state1 = np.mean(iq_state1)
iq_mean_state0 = np.mean(iq_state0)

# Rotate both IQ clouds so that the 0 -> 1 axis lies along the real axis.
vector01 = iq_mean_state1 - iq_mean_state0
rotation_angle = np.angle(vector01)

iq_state1_rotated = iq_state1 * np.exp(-1j * rotation_angle)
iq_state0_rotated = iq_state0 * np.exp(-1j * rotation_angle)

real_values_state1 = iq_state1_rotated.real
real_values_state0 = iq_state0_rotated.real

real_values_combined = np.concatenate(
    (real_values_state1, real_values_state0)
)

# Unnormalized empirical CDFs of each state, evaluated on the combined sample.
cum_distribution_state1 = cumulative(
    real_values_combined, real_values_state1
)
cum_distribution_state0 = cumulative(
    real_values_combined, real_values_state0
)

# The threshold sits where the two CDFs differ the most; count the
# misassigned shots of each state at that threshold.
cum_distribution_diff = np.abs(
    np.array(cum_distribution_state1) - np.array(cum_distribution_state0)
)
argmax = np.argmax(cum_distribution_diff)
errors_state1 = nshots - cum_distribution_state1[argmax]
errors_state0 = cum_distribution_state0[argmax]
fidelities.append((errors_state1 + errors_state0) / nshots / 2)
A reviewer (Contributor) commented:

This is the same operation that we are doing for the single-shot classification, right?
We should be able to recycle that function in one way or another.
Did you also speed up the fitting in the single-shot classification?

@Edoardo-Pedicillo (Contributor, Author) replied:

> This is the same operation that we are doing for the single-shot classification, right?

Yes.

> We should be able to recycle that function in one way or another.

If it is not strictly necessary now, I would prefer to do it in the (near-)future PR that ports the classification models.

> Did you also speed up the fitting in the single-shot classification?

Yes, only the cumulative part.
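
As a sketch of the recycling discussed above, the block from the review could be factored into a helper shared by the single-shot classification fit and this routine. The name `cumulative_fidelity` is hypothetical, not taken from the PR, and `cumulative` is assumed to behave as in the sketch near the top of this thread.

```python
import numpy as np

# `cumulative` is assumed available, e.g. the numba sketch earlier in
# this thread: unnormalized empirical CDF counts of `points` at the
# values in the first argument.


def cumulative_fidelity(iq_state0, iq_state1):
    """Hypothetical shared helper: returns exactly the quantity appended
    to `fidelities` in the review snippet above, given complex IQ shot
    arrays for states 0 and 1."""
    nshots = len(iq_state0)

    # Rotate both clouds so the 0 -> 1 axis lies along the real axis.
    rotation_angle = np.angle(np.mean(iq_state1) - np.mean(iq_state0))
    real_state1 = (iq_state1 * np.exp(-1j * rotation_angle)).real
    real_state0 = (iq_state0 * np.exp(-1j * rotation_angle)).real

    # Unnormalized empirical CDFs on the combined sample.
    combined = np.concatenate((real_state1, real_state0))
    cum_state1 = np.array(cumulative(combined, real_state1))
    cum_state0 = np.array(cumulative(combined, real_state0))

    # Threshold where the two CDFs differ the most.
    argmax = np.argmax(np.abs(cum_state1 - cum_state0))
    errors_state1 = nshots - cum_state1[argmax]
    errors_state0 = cum_state0[argmax]
    return (errors_state1 + errors_state0) / nshots / 2
```

Both the resonator-frequency routine and the single-shot classification fit could then call such a helper instead of duplicating the rotation-plus-CDF logic.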

@andrea-pasquale (Contributor) left a comment:

Thanks @Edoardo-Pedicillo, looks good to me.

@andrea-pasquale added this pull request to the merge queue on Jul 20, 2023.
Merged via the queue into main with commit ec7e5cc on Jul 20, 2023.
@andrea-pasquale deleted the auto_ro_optimization branch on Jul 20, 2023 at 07:09.