
Missing features for new layout #336

Closed · 12 of 16 tasks
andrea-pasquale opened this issue May 16, 2023 · 10 comments
@andrea-pasquale (Contributor) commented May 16, 2023

I've opened this issue to collect a list of missing features for the new layout of qibocal (autocalibration).

@aorgazf @DavidSarlle @maxhant @ingoroth @wilkensJ @vodovozovaliza @igres26 @Edoardo-Pedicillo @alecandido @stavros11 @scarrazza feel free to drop any suggestions here.

@wilkensJ (Contributor)

Thank you Andrea, that list looks great. I have one suggestion, though it is more of an overall one.
I think the result objects you introduced are a very powerful tool for implementing many features, for example:

  • Make post-processing repeatable (i.e. repeat a fit with different initial values, with another fitting function, or even with a different post-processing scheme, ...)
  • Display the result differently depending on the user (for example, qq returns an HTML string, but if I want to look at the result from a different angle I can just import it into a script/Jupyter notebook and play around with it)

Also, I don't know what the status of the optimization loops is, but those would also be great; @vodovozovaliza has already coded something in that direction.

@andrea-pasquale (Contributor, Author)

Thanks @wilkensJ.

> Display the result differently depending on the user (for example, qq returns an HTML string, but if I want to look at the result from a different angle I can just import it into a script/Jupyter notebook and play around with it)

I had something similar in mind when I added the "change plotting mechanism" entry. I will expand on it a bit more.

> Make post-processing repeatable (i.e. repeat a fit with different initial values, with another fitting function, or even with a different post-processing scheme, ...)

Do you mean the possibility of specifying in the runcard which fitting/post-processing to run, or, more generally, the possibility of repeating the fit/report generation after the data has been acquired? I've already added the latter in #335.

@wilkensJ (Contributor)

Great!

> Do you mean the possibility of specifying in the runcard which fitting/post-processing to run, or, more generally, the possibility of repeating the fit/report generation after the data has been acquired? I've already added the latter in #335.

I meant the latter; sorry, I was not aware of #335. That sounds great.

> I had something similar in mind when I added the "change plotting mechanism" entry. I will expand on it a bit more.

In general, in my opinion, the result objects should know how to display themselves. For example, there would be a single method called when the report is being built, something like result.make_html(), and all the figures, tables, boxes, and values would be wrapped and displayed correctly. In my opinion, this approach offers a more flexible way of displaying results.
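A minimal sketch of that idea, assuming a hypothetical SpectroscopyResult class; neither the class nor make_html is actual qibocal API, and plotly is used just for illustration:

from dataclasses import dataclass

import numpy as np
import plotly.graph_objects as go


@dataclass
class SpectroscopyResult:
    # Hypothetical result object that knows how to display itself.
    frequencies: np.ndarray
    amplitudes: np.ndarray
    fitted_frequency: float

    def make_html(self) -> str:
        # Wrap the figure and the table into a single HTML fragment;
        # the report builder only ever calls this one method.
        fig = go.Figure(go.Scatter(x=self.frequencies, y=self.amplitudes))
        table = (
            "<table><tr><td>fitted frequency [Hz]</td>"
            f"<td>{self.fitted_frequency:.4e}</td></tr></table>"
        )
        return fig.to_html(full_html=False, include_plotlyjs="cdn") + table

The same object remains directly usable from a script or Jupyter notebook (result.frequencies, result.fitted_frequency), without going through the HTML path at all.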

@DavidSarlle (Contributor)

@andrea-pasquale I think we should create the HTML report after each execution of the routines integrated in the "auto calibration" action runcard. Otherwise, if the last routine of a set of executed routines fails, the report is not generated, even though the rest of the routines ran correctly.

@andrea-pasquale (Contributor, Author)

Thanks @DavidSarlle.
I've added it to the list.

@alecandido (Member)

@wilkensJ @DavidSarlle, generating HTML in the routines goes against the modularity principle, since the final HTML is barely composable.
Moreover, display logic should be mixed as little as possible (ideally not at all) with the runtime logic.

So, it is reasonable for the generation of figures and tables to be defined in the context of the routines, because there you are very strongly coupled to the output content.
However, once the figures and tables are generated, they can be treated as perfectly anonymous assets by a report generator.

Moreover, we do not even need to run the plotting/tabling and report generation in the same job that runs data acquisition and fitting. Once the data and fit results are available, they will be on disk, and they can be further processed by a different job that only has to be aware of the output structure.

So, the plan for auto reports (and post-processing in general) is the following:

  • plot functions will be defined in routines, to abstract the content structure into more generic elements (consuming the content information)
  • the report will be fully separate from the auto calibrator: at some point we might even consider implementing reports in a language other than Python (indeed, I am considering it, since it makes much more sense to have the presentation written in JS, being an HTML page)
    • as said above, it will only have to be interfaced with the output
  • once you run the auto calibrator, you will receive no output other than the raw data and fit results (it is automated; in principle you might not even need to look at the report, so better not to waste resources by default)
  • it will be possible to run a report generator on an output folder
    • thus there is no need to take into account a routine failing: if one fails, its output will be partial, but it will not affect any other output
  • sooner or later, we can also have a self-updating report
    • I would even call it live, but it won't be live in the strict sense, since it will only update when more output becomes available, i.e. at the end of each routine
    • for this, we will only need a file-system listener, which is quite simple and supported by external libraries in a fairly portable way (see the sketch below)
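A minimal sketch of such a listener, using the third-party watchdog library; the output folder layout and the regenerate_report callback are hypothetical placeholders, not qibocal code:

import time

from watchdog.events import FileSystemEventHandler
from watchdog.observers import Observer


def regenerate_report(output_folder: str) -> None:
    # Placeholder: rebuild the report from whatever data and fit
    # results are currently on disk.
    print(f"regenerating report for {output_folder}")


class OutputHandler(FileSystemEventHandler):
    def __init__(self, output_folder: str):
        self.output_folder = output_folder

    def on_created(self, event):
        # A routine finished and wrote new results: refresh the report.
        if not event.is_directory:
            regenerate_report(self.output_folder)


if __name__ == "__main__":
    folder = "output"
    observer = Observer()
    observer.schedule(OutputHandler(folder), folder, recursive=True)
    observer.start()
    try:
        while True:
            time.sleep(1)
    except KeyboardInterrupt:
        observer.stop()
    observer.join()

Rebuilding the whole report on every event keeps the listener trivial; since report generation should be cheap compared to data acquisition, incremental updates are hardly worth the complexity.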

@rodolfocarobene (Contributor)

Hello, we have a small feature request!
Currently, there are some routines where we need to provide a pulse parameter (amplitude, for example).
When using these routines with multiple qubits, it would be nice to be able to set different values for different qubits, for example:

...
...
qubits: [0, 1, 2]
...
...
   - id: qubit spectroscopy
     priority: 0
     operation: qubit_spectroscopy
     parameters:
       drive_amplitude: 0.08        #  this could be [0.08, 0.1, 0.2] or also just 0.08
       drive_duration: 2000         #  this could be [2000, 1000, 3000] or also just 1000
       freq_width: 50_000_000
       freq_step: 100_000
       nshots: 1000
       relaxation_time: 100_000

This should be fairly easy to implement, but a bit annoying since it requires changes in a lot of routines.
Also, maybe this is already covered in #351...
What do you think?

@andrea-pasquale (Contributor, Author) commented Jun 1, 2023

I think that it is reasonable. For some routines, if those parameters are not provided, we use the values written in the runcard. In this particular case of qubit spectroscopy, those values are not written anywhere in the runcard, so it makes sense.

However, I understand that having more flexibility helps when characterizing.
For me the expected behavior should be the following (see the sketch below):

  1. if no value is specified, we fall back to the runcard values, or raise an error if they are not present in the runcard
  2. if a single value is provided, that value will be used for all the qubits
  3. if a list is provided, each qubit will get a custom value

Let me know if you agree, @rodolfocarobene.
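A minimal sketch of these resolution rules; resolve_parameter is a hypothetical helper, not qibocal's actual implementation:

from typing import Optional, Union


def resolve_parameter(
    value: Optional[Union[float, list]],
    qubits: list,
    runcard_defaults: dict,
    name: str,
) -> dict:
    # Map a missing value, a scalar, or a list to one value per qubit.
    if value is None:
        # Rule 1: fall back to the runcard, or raise if absent there too.
        try:
            return {q: runcard_defaults[q][name] for q in qubits}
        except KeyError:
            raise ValueError(f"{name} missing from both action and runcard")
    if isinstance(value, list):
        # Rule 3: one custom value per qubit.
        if len(value) != len(qubits):
            raise ValueError(f"{name}: expected {len(qubits)} values")
        return dict(zip(qubits, value))
    # Rule 2: broadcast a single value to all qubits.
    return {q: value for q in qubits}

For example, resolve_parameter(0.08, [0, 1, 2], {}, "drive_amplitude") returns {0: 0.08, 1: 0.08, 2: 0.08}, while passing [0.08, 0.1, 0.2] assigns one value per qubit.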

@rodolfocarobene (Contributor)

Thanks @andrea-pasquale. Yes, I think that would be ideal.

@andrea-pasquale (Contributor, Author)

Closing this as we already have separate issues dedicated to those points.
