Weakly reference binary operation codomains #14058
Attachment: 14058-weak-binop.patch.gz
comment:2
Is it possible to add an example that shows that parents become collectable that previously weren't?
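A minimal Python sketch of the behaviour being asked for here. This is not Sage's actual coercion code; `Parent` and the two caches are hypothetical stand-ins, showing that a strong-valued cache alone keeps an object alive while a weak-valued cache lets it be collected:

```python
import gc
import weakref

class Parent:
    """Hypothetical stand-in for a Sage parent (illustration only)."""
    pass

# Strong cache: the cached entry alone keeps the parent alive.
strong_cache = {}
p = Parent()
strong_cache["key"] = p
r = weakref.ref(p)
del p
gc.collect()
assert r() is not None   # still alive: the cache holds a strong reference

# Weak-valued cache: once the last external reference goes, the entry dies.
weak_cache = weakref.WeakValueDictionary()
q = Parent()
weak_cache["key"] = q
s = weakref.ref(q)
del q
gc.collect()
assert s() is None       # collectable: only the weak cache referenced it
```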
Doctest proposal
comment:3
Attachment: trac_14058-doctest.patch.gz

I like the idea. Can we get some data on speed regressions due to this? In principle the double lookup might be noticeable. I don't mean speed regressions due to parents getting deleted -- those are good to know but need other fixes; I mean the case when the parents are still around. The attached doctests may need #12313 to work, in which case this ticket should depend on that (because that's then necessary to get any benefit from this ticket).
comment:4
Sadly, the performance hit is quite noticeable.

Within a run, timing … without patch …

For …, the penalties seem to be about …. This indicates to me it may be worth trying to store a weakref to the morphism instead, since that can probably be dereferenced faster than a coercion lookup on the codomain. Logically this should be equivalent: we're just storing a weakref to the object we're interested in, rather than instructions for where to go and look up the object we want.

Caveat: for stored coercions of the type …. However, for stored coercions of the type ….

Major problem for immediately applying this idea: …
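The idea proposed above -- dereferencing a stored weakref to the morphism instead of redoing a coercion lookup -- can be sketched as follows. `Morphism`, `Codomain`, and `_coerce_cache` are hypothetical stand-ins, not Sage's actual classes:

```python
import weakref

class Morphism:
    """Hypothetical stand-in for a coercion morphism."""
    def __init__(self, domain, codomain):
        self.domain = domain
        self.codomain = codomain

class Codomain:
    def __init__(self):
        # The codomain keeps strong references to its incoming coercions.
        self._coerce_cache = {}

A, C = object(), Codomain()
phi = Morphism(A, C)
C._coerce_cache[id(A)] = phi   # strong reference stored on the codomain

# The coercion model stores only a weak reference to the same morphism:
wphi = weakref.ref(phi)
del phi

# Dereferencing the weakref is a single call; as long as the codomain
# (and hence its coerce cache) is alive, so is the morphism.
assert wphi() is C._coerce_cache[id(A)]
```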
comment:5
Yeah, I had exactly this same idea. I'll post an updated patch.
comment:6
Attachment: 14058-weak-binop-morphisms.patch.gz

Apply 14058-weak-binop-morphisms.patch and trac_14058-doctest.patch
comment:7
Great! With this new patch I cannot distinguish the timing differences from the noise one normally gets anyway, so I'd think this is quite acceptable.
comment:8
The patchbot seems unhappy about the doctest. Probably #12313 is indeed necessary to make parents reclaimable.
Dependencies: #12313
comment:9
Stating in the ticket description which patches are to be applied. Admittedly, I didn't yet test whether the doctest patch applies after Robert's new patch, but the test patch is needed, I think.

Apply 14058-weak-binop-morphisms.patch and trac_14058-doctest.patch
comment:11
The patches apply, and at least the new test passes. So this, together with #12313, does fix another leak.
comment:13
In vanilla Sage, the coercion model stores (coercion) morphisms in its cache (which was a …). The coercion model kept a strong reference to the morphism, which kept strong references to its domain and codomain; these were thus not reclaimable, and so the corresponding cache item ….

With the patch, the coercion model keeps a weak reference to the coercion morphism; hence the strong references from the morphism to its domain and codomain don't matter, and the corresponding cache item ….

But how is the morphism kept alive? The coercion model only has a weak reference to it. Do I understand correctly that the morphism involved in the binary operation is, at the same time, stored in the coerce dict of its codomain? That's why it does not immediately die?

In other words, do I understand that the layout is as follows? We have parents A and B, and want to apply some binary operator, with the result in a parent C (C may coincide with either A or B). We have maps r_A and r_B from A resp. B to C. Both r_A and r_B are stored with a strong reference in a cache located in C, with weak keys A and B. At the same time, they are stored with a weak reference in the coercion model, again with weak keys A and B. r_A has strong references to C and A; r_B has strong references to C and B.

What do we want? Do we want that keeping C alive makes A and B survive? Or do we want that keeping both A and B alive makes C survive?

If the user has a strong reference to C, then C has strong references to r_A and r_B (in its coerce cache), and r_A/r_B strongly refer to A/B. Hence the existence of C keeps A and B alive. Since C is a complicated object, it is more likely to be mortal, so this is probably not much of a problem.

If the user has strong references to both A and B, then C keeps strong references to both r_A and r_B, and the coercion model keeps weak references to r_A and r_B. Moreover, r_A and r_B strongly refer to C. However, isn't that just a reference cycle between C and r_A/r_B? It would not prevent C from becoming collectable, right? I doubt that we want that. It is not desirable that, when adding elements of …, ….

Or did I misunderstand something?
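The reference cycle suspected in this comment can be modelled in plain Python. This is a hypothetical sketch (not Sage's `TripleDict` machinery): C strongly references the maps, the maps strongly reference C, and the coercion model holds only weak references, so the whole cycle is garbage-collected even while A and B survive:

```python
import gc
import weakref

class Map:
    def __init__(self, domain, codomain):
        self.domain, self.codomain = domain, codomain

class Parent:
    def __init__(self):
        self._coerce_cache = []   # strong refs to incoming maps

A, B, C = Parent(), Parent(), Parent()
r_A, r_B = Map(A, C), Map(B, C)
C._coerce_cache += [r_A, r_B]          # C -> maps -> C : a reference cycle

coercion_model = weakref.WeakValueDictionary()
coercion_model[(id(A), id(B))] = r_A   # the model holds only weak refs

wC = weakref.ref(C)
del C, r_A, r_B
gc.collect()                           # the cycle C <-> maps is collected

assert wC() is None                    # C died even though A and B survive
assert (id(A), id(B)) not in coercion_model
```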
An example that shows that the current patch needs work |
comment:14
Attachment: testcoercion.py.gz
In my opinion, that is exactly what we want, or at least what we have to settle for. If a person wants to work efficiently with …. In this scenario:

If we keep the ….

I don't think that we can afford to assume that just because a user has combined two parents he/she will do so again in the future. We may be able to afford doing so if it happened recently, but that really is an optimisation, and currently I don't see a hook where we can do this efficiently. A limited-length buffer that gets updated whenever a certain component sees a parent go by might work. In coercion discovery, perhaps? That's already slow, and it would protect exactly these automatically generated parents. Logically it's hard to defend, but it may be OK in practice. Another possibility is just ….

The problem with artificially keeping transient elements alive for longer is that you'll defeat the heuristics for generational garbage collection (which Python does), and we depend rather heavily on that to get rid of cyclic garbage. Testing would be required to see if this is indeed a problem, and it will be very application-dependent.
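The limited-length buffer floated above can be sketched with a bounded deque. This is a hypothetical illustration of the idea, not an existing Sage component; `MRUKeeper` and its `touch` method are invented names:

```python
from collections import deque

class MRUKeeper:
    """Hypothetical bounded buffer: holds strong references to the N most
    recently seen parents, so short-lived results survive a little longer."""
    def __init__(self, size=16):
        self._buf = deque(maxlen=size)   # oldest entries fall off automatically

    def touch(self, parent):
        self._buf.append(parent)         # strong ref, evicted after `size` others

keeper = MRUKeeper(size=2)
a, b, c = object(), object(), object()
keeper.touch(a)
keeper.touch(b)
keeper.touch(c)
# Only the two most recently touched objects are still pinned by the buffer.
assert a not in keeper._buf
assert b in keeper._buf and c in keeper._buf
```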
comment:15
Before posting a reply to Nils' post, here is an example showing that the existence of A and B does not ensure the survival of the pushout of A and B. First, attach attachment: testcoercion.py into a Sage session. Then:
I do believe that this is not desired behaviour.
comment:16
Replying to @nbruin:
Granted. But it is very typical usage not to explicitly convert everything into a common parent, and instead let the coercion model do the work. That is what the coercion model is for!
However, in the example that I posted above, note that multiplying …. I think it is not acceptable that repeated creation is triggered.
In your example, ….
OK, but not all unique parent structures rely on it.
comment:17
Replying to @simon-king-jena:
...
But how do you tell the difference? Against too little caching there is a simple defense: keep a reference yourself. Against too much caching there is nothing you can do: you just run out of memory because data structures outside your control blow up. We clearly have too much caching presently, in some cases by design. I think we first need a design that's fairly clean and for which we can reliably reason that there's not "too much" caching (changing the order of the memory requirement is definitely "too much" for me). The next step is tweaking it, possibly with MRU queues (which in that case should really be investigated for interference with efficient GC, which is based on the assumption that objects either live very short or very long; artificially creating objects with a medium lifespan is bad).
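The generational-GC point made here can be demonstrated directly with the `gc` module: CPython scans young objects often and promotes survivors to older, rarely-scanned generations, and cyclic garbage is only reclaimed when the collector runs. A small self-contained illustration:

```python
import gc

# CPython's collector is generational; the thresholds control how often
# each of the three generations is scanned (younger = more often).
thresholds = gc.get_threshold()   # typically (700, 10, 10) by default

class Node:
    def __init__(self):
        self.ref = self   # self-cycle: reclaimable only by the cyclic GC

n = Node()
gc.collect()                  # n survives (still referenced) and is promoted
del n
unreachable = gc.collect()    # now the cycle is found and freed
assert unreachable >= 1       # the collector reported unreachable objects
```

Objects that live "medium-long" get promoted before they die, so they sit in the old generation and delay reclamation, which is exactly the concern with artificially extended lifetimes.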
comment:18
By the way, it seems that the "insufficient" caching (not enough to keep C() alive) has no influence on performance, at least not if computations are done in a tight loop:
In the first run of test(a,b), there is the possibility that ….

Is there a way to get the patchbots to report significant regressions in some examples? Because I am sure that some parts of Sage (schemes and combinat are typical candidates) rely on a strong coercion cache.
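The claim that the cached path is indistinguishable from noise can be checked with a micro-benchmark. This is a generic Python sketch (not the Sage benchmark from the thread), comparing a dict lookup against a weakref dereference, the two operations at issue:

```python
import timeit
import weakref

class Morphism:
    """Hypothetical stand-in for a cached coercion morphism."""
    pass

phi = Morphism()
wphi = weakref.ref(phi)
cache = {("A", "B"): phi}

# Both paths are a handful of nanoseconds per call; in a tight loop the
# difference drowns in ordinary timing noise, matching comment:18 above.
t_strong = timeit.timeit(lambda: cache[("A", "B")], number=100_000)
t_weak = timeit.timeit(lambda: wphi(), number=100_000)
assert t_strong > 0 and t_weak > 0
```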
comment:19
Replying to @simon-king-jena:
Ah right, because ….

NOTE: Doesn't …?
comment:20
I think it makes sense to ask sage-devel whether it is better to fix this leak or not. I believe that keeping A and B alive should prevent C from garbage collection. In contrast, keeping C alive should not prevent A and B from garbage collection. Currently, it is the other way around. Perhaps we should try to think through whether having a weak reference to the domain of coercion or conversion maps would work.
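The "weak reference to the domain" idea proposed here can be sketched as follows. `WeakDomainMap` is a hypothetical class invented for illustration, not an existing Sage map type: the map pins its codomain but not its domain, so the domain stays collectable:

```python
import gc
import weakref

class Parent:
    """Hypothetical stand-in for a Sage parent."""
    pass

class WeakDomainMap:
    """Sketch of a map that holds its domain only weakly."""
    def __init__(self, domain, codomain):
        self._domain = weakref.ref(domain)   # weak: the map won't pin the domain
        self.codomain = codomain             # strong: the result parent stays usable

    def domain(self):
        d = self._domain()
        if d is None:
            raise RuntimeError("domain has been garbage-collected")
        return d

A, C = Parent(), Parent()
m = WeakDomainMap(A, C)
assert m.domain() is A

wA = weakref.ref(A)
del A
gc.collect()
assert wA() is None   # the map alone no longer keeps the domain alive
```

The cost of this layout is visible in the `RuntimeError` branch: any code that still holds the map must be prepared for a dead domain.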
comment:62
The failing doctest is reproducible after applying this branch onto ….
comment:63
On sage-6.9.beta1, I get:
On sage-6.9.beta1 + #14058, I get a very good improvement:
Note: I originally thought it was a regression in Sage, since I was getting this earlier this morning with sage-6.8 + #14058. But thankfully, it seems to be an improvement obtained from this ticket. The file …
comment:64
I also get improvements on the tests of Simon King at comment 18 above. With sage-6.9.beta1:
With sage-6.9.beta1 + this ticket:
(My file testcoercion.sage contains testcoercion.py + the lines at comment 18.)
comment:65
With sage-6.9.beta1 + this ticket, I get
but I get the problem when I test after the build
which gives the error:
I have never seen such behavior.
Branch pushed to git repo; I updated commit sha1. Last 10 new commits:
comment:68
In other doctests in the same file, …. One way to fix this is to call ….

(My first fix was using sets of ids... sorry for the noise.)
comment:69
Replying to @seblabbe:
I agree, that's the way to fix it.
comment:71
I assume Simon's comment gives positive review here, as this was the only thing left to fix.
Changed reviewer from Simon King, Frédéric Chapoton, Jean-Pierre Flori to Simon King, Frédéric Chapoton, Jean-Pierre Flori, Sébastien Labbé
Changed dependencies from #12313 to none
Changed branch from public/ticket/14058 to |
Changed commit from |
comment:76
On arando:
comment:77
See #19244 |
R(1) + S(1) caches a strong reference to both R and S, which we'd like to avoid.
See discussion at https://groups.google.com/forum/?fromgroups=#!topic/sage-devel/ozhIHIwQ4WA
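The leak described in this ticket summary can be modelled in plain Python. `Parent` and `binop_cache` are hypothetical stand-ins for Sage's parents and the coercion model's binary-operation cache, showing how a strong-valued cache pins both operand parents:

```python
import gc
import weakref

class Parent:
    """Hypothetical stand-in for the parents R and S in the description."""
    pass

binop_cache = {}                       # old-style strong-valued coercion cache
R, S = Parent(), Parent()
binop_cache[(id(R), id(S))] = (R, S)   # what an operation like R(1) + S(1) leaves behind

wR = weakref.ref(R)
del R, S
gc.collect()
assert wR() is not None                # leak: the cache alone pins R (and S)
```

Replacing the strong values with weak references (as the attached patches do for the morphisms) is what makes such entries collectable again.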
CC: @nbruin @simon-king-jena @jpflori
Component: memleak
Author: Robert Bradshaw, Nils Bruin
Branch:
13cb2a7
Reviewer: Simon King, Frédéric Chapoton, Jean-Pierre Flori, Sébastien Labbé
Issue created by migration from https://trac.sagemath.org/ticket/14058