+ local cache on occurrence requests #53
@7yl4r, I'm not sure this is going to work, as the occurrence identifiers are not sequential and not persistent across dataset updates. When a dataset is updated, the old version is completely removed and replaced with the new version, where the occurrences will have new identifiers. The occurrences we receive from data providers often do not have globally unique identifiers, so it's not trivial to determine which individual records have been added, removed, or edited.
That's unfortunate. Maybe something could be done with the …
Using …, perhaps you could follow this workflow: […]
I suppose this would offer some improvement, although you need to be aware that some nodes regularly regenerate their whole IPT, which makes it look like all datasets have been updated.
Thank you Pieter. 🙌 If I'm ambitious in the coming weeks I may try implementing this and submitting a pull request. I am very glad I asked before assuming occurrence ids were sequential.
@7yl4r Ok, I'm pretty busy right now but I'll try to add the necessary published date parameter to …
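A minimal sketch of what that workflow might look like, assuming `dataset()` exposes a per-dataset published date (the `updated` column here is hypothetical, as are the `id`/`dataset_id` field names and passing a vector of ids to `datasetid`):

```r
# Sketch only: re-download occurrences just for datasets whose
# published date changed since the last run. Field names and the
# vectorised datasetid argument are assumptions, not robis API facts.
library(robis)

refresh_cache <- function(cache_file, ...) {
  meta <- dataset(...)  # metadata for all datasets matching the query
  if (file.exists(cache_file)) {
    cache <- readRDS(cache_file)
    # Published date of each dataset as of the previous run (NA if new).
    old <- cache$meta$updated[match(meta$id, cache$meta$id)]
    stale <- meta$id[is.na(old) | old != meta$updated]
    # Keep occurrences from unchanged datasets, re-fetch the rest.
    occ <- cache$occ[!(cache$occ$dataset_id %in% stale), ]
    if (length(stale) > 0) {
      occ <- rbind(occ, occurrence(datasetid = stale, ...))
    }
  } else {
    occ <- occurrence(...)
  }
  saveRDS(list(meta = meta, occ = occ), cache_file)
  occ
}
```

As noted above, nodes that regenerate their whole IPT would still make every dataset look updated and force a full re-download in those cases.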
I'm doing repeated queries of occurrence records which can return large amounts of data.
Rather than downloading it all every time, I plan to save to a cache file and update it only with the newer records.
I think this can be accomplished by saving the df as a `.rds` file including the "after" occurrence id. Example:
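A minimal sketch of the proposal as written, assuming a hypothetical `after` parameter on `occurrence()` and sequential numeric `id` values; the comments above explain why that assumption does not hold in practice:

```r
# Sketch of the proposed cache. The "after" parameter and sequential,
# persistent occurrence ids are assumptions (see the discussion above).
library(robis)

cached_occurrence <- function(scientificname, cache_file = "occ_cache.rds") {
  if (file.exists(cache_file)) {
    cache <- readRDS(cache_file)
    # Fetch only records newer than the last cached occurrence id.
    new <- occurrence(scientificname, after = cache$last_id)
    df <- rbind(cache$df, new)
  } else {
    df <- occurrence(scientificname)
  }
  saveRDS(list(df = df, last_id = max(df$id)), cache_file)
  df
}
```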
This will speed things up a lot for me and reduce load on OBIS servers.
As a bonus, a filepath could be passed to the `cache` param, giving the user control over where the cache file is stored. Prereq: #7
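One illustrative shape for that hypothetical `cache` param (the argument name and default are invented here, not part of robis):

```r
# Hypothetical wrapper: cache = NULL disables caching, a filepath
# enables it and controls where the .rds file lives.
occurrence_cached <- function(..., cache = NULL) {
  if (is.null(cache)) {
    return(robis::occurrence(...))
  }
  cached_occurrence(..., cache_file = cache)  # reuses the sketch above
}

# e.g. occ <- occurrence_cached("Mola mola", cache = "data/mola_cache.rds")
```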