Wishlist item: mailing list for pre-release warnings #304
I'm not sure it would be of any great help. We would essentially ask users to test for us and I don't know who has the time/energy to do it. But we could announce it in the `PyVO` slack channel to give users a heads up. IMO, these are gaps in regression testing. If we add tests for each of these bugs, we will continue to make the package more robust to updates. The other thing we need to do is become more agile with releases. The process is not yet automated, but when we achieve that we could quickly address pressing issues by doing minor releases to stay on top of these things. Just my opinion. |
Hi Adrian,
On Wed, Feb 23, 2022 at 12:38:23PM -0800, Adrian wrote:
> I'm not sure it would be of any great help. We would essentially
> ask users to test for us and I don't know who has the time/energy
> to do it. But we could announce it in the `PyVO` slack channel to
> give users a heads up.
Well, I'm severely unhappy about Slack, as it is the prototype of
the "subvert a public infrastructure [IRC in this case] and then
lock it out once you have sufficient lock-in of your users" pattern,
but I suppose this would still work for most data centres.
So, if we document that and nobody protests that they want open-web
announcements, I'd say go for it.
> IMO, these are gaps in regression testing. If we add tests for each
That's true in *some* way. But the truth is that adopters use our
standards in so many creative (in all the word's many meanings) ways
that unit or regression tests would probably only work if we could
get our adopters to contribute them as they go -- and that, again,
would be suboptimal, as that would give us a lot of relatively
fragile tests: They'd depend on external resources that do change
regularly.
> of these bugs, we will continue to make the package more robust
> to updates. The other thing we need to do is become more agile with
> releases. The process is not yet automated, but when we achieve
> that we could quickly address pressing issues by doing minor
> releases to stay on top of these things.
Well, doing announced beta releases that people could use to make
sure we didn't break anything would of course be an interesting
alternative, but I suspect that'd be quite a bit more work than an
"advance warning" scheme.
|
Hi Markus, Just to try out the effectiveness of slack, I'm going to post this issue on the channel and invite people to comment on it... Another way to test PyVO prior to release is to encourage users to include the PyVO dev version (the GitHub main branch) in their nightly/periodic tests, e.g. via GitHub Actions. It is similar to what we have now for |
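As a rough sketch, such a nightly run against the development branch could look like this in GitHub Actions (the workflow name, schedule, and test command are illustrative; `astropy/pyvo` is the upstream repository):

```yaml
# .github/workflows/nightly-pyvo-dev.yml (illustrative sketch)
name: nightly-pyvo-dev
on:
  schedule:
    - cron: "0 4 * * *"   # once a day
jobs:
  test-against-dev:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-python@v4
      - name: Install pyvo from the main branch
        run: pip install git+https://github.com/astropy/pyvo.git@main
      - name: Run our own test suite against pyvo dev
        run: pytest
```

When the scheduled job fails, the downstream project learns about a breaking change before it ever reaches a pyvo release.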
Most Python applications end up with a stack of dependencies on third-party packages, each of which is capable of upsetting the top-level application with a new release. This house of cards is difficult to manage for any application; the two main approaches are locking the versions of dependencies and enabling automated tests against dev versions of some of the key dependencies. Of course each package should endeavor to release minor versions that are fully backward compatible, but as mentioned above, it's impossible to protect against all the ways the package might be used downstream.

I like the suggestion of a pre-release notice, as it gives dependent applications a chance to check once without needing to set up an automated test suite. It still won't cover all issues, so mission-critical applications should probably lock the versions of dependencies like

For the medium of communication, I would suggest the astropy #pyvo Slack channel and the IVOA interop mailing list. I'm less enthusiastic about setting up and maintaining a new mailing list, but maybe there is something we could try under the auspices of the astropy lists or OpenAstronomy. |
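Version locking, the first approach mentioned above, is typically just a pinned requirements file (the package versions below are illustrative, not a recommendation):

```
# requirements.txt -- exact pins; a new upstream release cannot
# change this application's behaviour until a pin is
# deliberately updated and re-tested
pyvo==1.2
astropy==5.0
```

The trade-off is that pinned applications must periodically bump and re-test the pins, or they slowly fall behind on bug and security fixes.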
Small suggestion: write more tutorials demonstrating common use cases, put them on astropy-learn, and use them as part of the test suite. Also, pay attention to the astroquery test suite - astroquery uses pyvo under the hood a lot, and while it's not there to provide regression tests, it should expand the coverage w/no added effort. |
User notebooks and downstream installable packages are two different beasts. The easier case is the downstream installable package: its devs can add pyvo dev into their CI and nightly tests, and promptly report upstream (or better yet, patch pyvo) when things break. The harder case is a user notebook in the wild. If the notebook is curated by your institution, you can maybe test it, or jail your runtime environment in Docker or something similar. But a notebook that a random scientist writes on their own, no one can catch. We rely on people to report back when things break, and then add such use cases to our own CI over time. |
yeah, because learn is in any way robust, well tested, and has good devops cycles to report upstream, like they even remotely value infrastructure and stability. :sarcasm: |
I wholeheartedly agree with the above that testing with the dev version should be promoted. I also agree with the notion above to extend the test suite of pyvo itself. One obvious place to start would be the documentation itself: have full examples that are guaranteed to work correctly, and enable doctesting to ensure they stay that way. |
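As a sketch of the doctest idea: examples embedded in docstrings double as regression tests that fail as soon as documentation and code drift apart. The function below is made up for illustration and is not part of PyVO's API.

```python
def cone_radius_deg(arcmin):
    """Convert a cone-search radius from arcminutes to degrees.

    The examples below are checked by the doctest runner:

    >>> cone_radius_deg(30.0)
    0.5
    >>> cone_radius_deg(0.0)
    0.0
    """
    return arcmin / 60.0

if __name__ == "__main__":
    # Run the embedded examples; a failing example makes the
    # process exit with a nonzero status, which CI can catch.
    import doctest
    doctest.testmod()
```

Sphinx's `doctest` extension (or `pytest --doctest-modules`) can run the same checks over a whole documentation tree.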
Would it be possible to keep a tally of these resources in an IVOA capacity, maintaining a large body of code examples and notebooks? I'm absolutely certain it's not just pyvo that breaks these notebooks, but other packages in the stack, too. So maybe some shared IVOA resource is the answer? Or at least that sounds like the obvious place for a mailing list, posting heads-ups about new releases and breakages. |
(And of course the classical approach is to do release candidates before API-breaking releases, but that would be unlikely to help recover issues with notebooks and user code in a timely fashion.) |
On Fri, Feb 25, 2022 at 09:26:54AM -0800, Brigitta Sipőcz wrote:
> Would it be possible to keep a tally of these resources in the ivoa
> capacity? Maintaining large body of code examples and notebooks I'm
Totally. As it happens, there already is a Registry extension
designed to do that, <http://ivoa.net/documents/Notes/EDU/>. And
there is a UI that pulls these entries out of the Registry and
formats them more or less nicely, <https://dc.g-vo.org/VOTT>.
There are no notebooks in there so far, just PDFs and a few HTML
pages, but this should work for notebooks, too.
However, it's a tedious uphill battle to get people to register their
material, and so I'm a bit reluctant to hope that that's an
appropriate answer. On the other hand, getting people to register
their notebooks would have quite a few ancillary benefits.
So: if someone reading this has a notebook they'd like to see checked
pre-release but isn't sure how to register it: Please speak up (or
contact me privately) so we can see how realistic this plan is.
|
The NAVO workshop notebooks are one of those; I'm not sure whether they are in that registry. Also, these are not exactly release critical, as they are checked against package versions before doing a new workshop and are updated when necessary. Anyway, announcement-wise, trying out a low-traffic mailing list would not hurt. I agree with the above that it's unlikely people have the capacity to test, but at least they are being warned and won't need to start their investigation from zero. |
Let's just say that STScI notebooks CI is a... work in progress... |
On Fri, Feb 25, 2022 at 11:52:58AM -0800, Brigitta Sipőcz wrote:
> The navo workshop notebooks are one of those, not sure whether they
> are in that registry. Also, these are not exactly release critical
They're not, but I'd be happy to help their maintainer to get them
in.
> as they are checked against package versions before doing a new
> workshop, and are updated when necessary.
It'd still be great if we could somehow use them as regression tests
(which, I think, would be rather feasible when they're registered and
perhaps very lightly marked up such that a machine can figure out
whether their execution failed): Whatever is in such notebooks will
be copy-and-pasted into notebooks of astronomers, and not breaking
these (unintentionally) I'd argue is a worthy goal.
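A minimal version of the machine check described above could shell out to `jupyter nbconvert`, which re-executes a notebook top to bottom and exits nonzero if any cell raises. The wrapper function and the notebook file name below are illustrative:

```python
import subprocess

def notebook_check_cmd(path):
    """Build the command line that re-executes a notebook;
    a nonzero exit status means some cell raised an exception."""
    return ["jupyter", "nbconvert", "--to", "notebook",
            "--execute", "--stdout", path]

# Usage (requires jupyter installed; "demo.ipynb" is a placeholder):
#   ok = subprocess.run(notebook_check_cmd("demo.ipynb"),
#                       capture_output=True).returncode == 0
```

A registry harvester could run this over every registered notebook before a release and report which ones fail against the candidate version.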
|
Do we need to keep this open longer? What are the action items that we could take? |
Since several data providers now have jupyter notebooks or other code depending on pyvo that we might inadvertently break (Bug #298 being my personal brown-bag example; but, really, given that we're dealing with rather complex contracts here, arbitrary variations of that are conceivable), I wonder if we shouldn't offer a mailing list that people who somehow depend on pyvo could subscribe to. Its only purpose would be to say "we're going to have a new release in 14 days -- please check that we don't break any of your code", probably including quick instructions for how to get the new release running in a venv.
What do people think?
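Such quick venv instructions could be as short as the following sketch (the path is a placeholder; `--pre` is pip's flag for allowing pre-release versions):

```shell
# Try a pre-release in a throwaway virtual environment, leaving
# your main Python installation untouched.
python3 -m venv /tmp/pyvo-pre               # create the environment
/tmp/pyvo-pre/bin/python -m pip --version   # confirm pip works inside it
# Then install the pre-release (requires network access):
#   /tmp/pyvo-pre/bin/pip install --pre pyvo
```

Deleting the directory afterwards removes the test installation completely.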