Push with no client request can exhaust the JVM heap #7657
Originally by mpilone: Attachment added.
Originally by proaccountapp: Updated prioritization date.
Originally by @Artur-: The best workaround right now is probably to set a suitable poll interval. A possible solution for this would be to make the client side acknowledge pushes from the server side, giving an opportunity for the server to do cleanup.
Originally by @Artur-: The client should already be aware of when the situation happens by detecting that the server pushed a message which caused connectors to be unregistered. It could acknowledge the unregistration in these cases by sending an empty XHR.
Originally by @hesara: Fix merged to master.
Originally by mthomas: Can this please be merged into the 7.6 branch?
Originally by mpilone
I ran into a problem in our production system where the JVM was running out of heap space. I tracked the problem down to our use of push on a specific view in our application.
The application has a single view that refreshes periodically with a list of message subjects similar to an email client. The list contains about 100 items displayed in a Vaadin Table with two generated columns (a button and a Label). The Table has a container data source.
Every 15 seconds, a server-side thread loads new messages from the database, empties the container, and adds an item for each message, setting about 15 properties on each new item. The container is already set on the Table, which appears to generate a ton of component adds/removes because the Table refreshes its internal cache on container modification events.
For each component removed, the connector ID is added to the ConnectorTracker via the unregisterConnector call. The ID is then added to the syncIdToUnregisteredConnectorIds map. This map is then purged in the cleanConcurrentlyRemovedConnectorIds call from the ServerRpcHandler.
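To make the bookkeeping concrete, here is a minimal sketch of the mechanism as I understand it. This is not the real Vaadin ConnectorTracker implementation; the class body and fields below are invented for illustration, though the names unregisterConnector, syncIdToUnregisteredConnectorIds, and cleanConcurrentlyRemovedConnectorIds are taken from the actual code paths described above.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Simplified sketch (NOT the real Vaadin ConnectorTracker): unregistered
// connector IDs accumulate per sync id and are only purged when a client
// request reaches the server.
public class TrackerSketch {
    // sync id -> connector IDs unregistered while that sync id was current
    private final Map<Integer, Set<String>> syncIdToUnregisteredConnectorIds =
            new HashMap<>();
    private int currentSyncId = 0;

    // Called for every removed component (unregisterConnector in Vaadin)
    public void unregisterConnector(String connectorId) {
        syncIdToUnregisteredConnectorIds
                .computeIfAbsent(currentSyncId, k -> new HashSet<>())
                .add(connectorId);
    }

    // Each server push advances the sync id but never purges the map
    public void push() {
        currentSyncId++;
    }

    // Only runs when ServerRpcHandler handles a client request (poll/XHR)
    public void cleanConcurrentlyRemovedConnectorIds(int lastSyncIdSeenByClient) {
        syncIdToUnregisteredConnectorIds.keySet()
                .removeIf(syncId -> syncId < lastSyncIdSeenByClient);
    }

    public int trackedEntries() {
        return syncIdToUnregisteredConnectorIds.values().stream()
                .mapToInt(Set::size).sum();
    }

    public static void main(String[] args) {
        TrackerSketch tracker = new TrackerSketch();
        // 1000 push cycles, each removing 100 components, with no client
        // request in between: the map grows without bound.
        for (int cycle = 0; cycle < 1000; cycle++) {
            for (int c = 0; c < 100; c++) {
                tracker.unregisterConnector("c" + cycle + "-" + c);
            }
            tracker.push();
        }
        System.out.println(tracker.trackedEntries()); // 100000
        // A single client request purges everything the client has seen:
        tracker.cleanConcurrentlyRemovedConnectorIds(1000);
        System.out.println(tracker.trackedEntries()); // 0
    }
}
```

The sketch shows the asymmetry at the heart of the bug: pushes grow the map, but only client-initiated requests shrink it.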
Now to the actual issue. The application only has push enabled and no polling interval. Users open this view and leave it sitting on a display for hours at a time. What I found is that with no polling interval, cleanConcurrentlyRemovedConnectorIds is never called because the ServerRpcHandler is never invoked by anything. That means we end up with tens of thousands of connector IDs in the syncIdToUnregisteredConnectorIds map, which eventually exhausts the heap.
Obviously the application has a design issue with the way the table is being regenerated and items added to the container, which causes far more component churn than desired, and I'm going to address that in my code. However, the connector ID map continually growing is still a problem, because even with only a few component removals the map will grow indefinitely over time.
One workaround is to enable a polling interval just to force the map to be flushed even when using push, but that only works if the polling interval is fast enough to purge the memory before the heap is exhausted. In well-written applications that should work, but in my case, with the poorly written container updates, it is possible to generate enough connector IDs in the map to exhaust memory pretty quickly. The speed of heap exhaustion is multiplied by each user on the view, because each has their own ConnectorTracker.
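To get a rough sense of scale using the numbers from this report (100 rows refreshed every 15 seconds), the arithmetic below estimates the growth rate. The two-connectors-per-row figure is my assumption, based on the two generated columns (button and label), and it ignores any additional churn from the ~15 property updates per item, so the real rate is likely higher:

```java
public class GrowthEstimate {
    public static void main(String[] args) {
        int rowsPerRefresh = 100;         // items in the Table (from the report)
        int connectorsPerRow = 2;         // assumption: generated button + label
        int refreshesPerHour = 3600 / 15; // one refresh every 15 seconds = 240/hour
        // connector IDs accumulated per user per hour when nothing purges the map
        int idsPerUserPerHour = rowsPerRefresh * connectorsPerRow * refreshesPerHour;
        System.out.println(idsPerUserPerHour); // 48000
    }
}
```

Multiply that by the hours a view is left open and by the number of users (each with their own ConnectorTracker), and heap exhaustion over a long-lived session becomes plausible even with modest per-refresh churn.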
In the end, I think the defect is that updates successfully pushed to the client do not automatically purge the unregistered connector IDs map. Without a poll interval (and its wasted round trips), it is possible to exhaust the heap over time on a single view. Using a poll interval helps, but it doesn't guarantee the issue won't appear, because multiple push operations (and therefore many component removals) could occur before the next poll.
If maintaining a polling interval is the recommended fix (which is OK, but not great), it should be documented somewhere (maybe in the Push annotation) that even with push enabled, a poll interval should be used to purge the connector ID map.
Imported from https://dev.vaadin.com/ issue #19822