Timestamp range boundaries are incorrectly checked #31
Comments
Hi @mikolaj-kow, thanks for bringing this up. I am aware of this issue but ended up keeping the current behavior. The main reason is that MRT files sometimes do include messages at the exact end timestamp. For your example, the previous file was included because it might contain messages announced at the exact hour mark. I am open to changing the default behavior to not include a file if its end time overlaps with the start time of the filter, and adding an additional flag for the current behavior.
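In the meantime, strict boundaries can be enforced client-side by dropping any returned item that starts before the requested window. A minimal sketch, assuming `BrokerItem` exposes a `ts_start` field as an ISO-formatted string (adjust the parsing if your pybgpkit version returns datetimes or unix timestamps):

```python
from datetime import datetime

from bgpkit import Broker

start = datetime(2014, 4, 1, 0, 0, 0)
end = datetime(2014, 4, 1, 0, 15, 0)

broker = Broker()
items = broker.query(ts_start=start.isoformat(), ts_end=end.isoformat())

# Drop the "previous" dump that is only included because its end time
# touches the start of the requested window. The ts_start field name and
# its string type are assumptions; adjust for your pybgpkit version.
strict_items = [
    item for item in items
    if datetime.fromisoformat(str(item.ts_start)) >= start
]
```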
Hi, I tried to apply a Parser timestamp filter to overcome this behaviour of the Broker and get a precise time range. This should be possible, however in Python it fails with
@mikolaj-kow it seems like there are some inconsistencies in the naming of the filters. Could you try
I was able to successfully use the following dictionary as a Parser filter
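For readers landing here, a filter dictionary of roughly this shape is what the Python `Parser` accepts. The key names, URL, and values below are illustrative placeholders (the naming inconsistency is exactly what this thread is about), not the exact dictionary used above:

```python
from bgpkit import Parser

# Placeholder keys and values: check the filter names supported by your
# bgpkit-parser version. Timestamps are passed as strings (unix seconds
# or RFC 3339).
filters = {
    "ts_start": "1396310400",  # 2014-04-01 00:00:00 UTC
    "ts_end": "1396311300",    # 2014-04-01 00:15:00 UTC
}
parser = Parser(
    url="https://example.org/updates.20140401.0000.bz2",  # illustrative URL
    filters=filters,
)
elems = parser.parse_all()
```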
Good to hear! Yeah, the string conversion is expected. The timestamp parameters accept both unix timestamps and RFC 3339 time strings (e.g.
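So, for instance, these two calls should describe the same 15-minute window; both forms are passed as strings (a sketch showing only the timestamp parameters):

```python
from bgpkit import Broker

broker = Broker()

# RFC 3339 / ISO 8601 string form
items_a = broker.query(ts_start="2014-04-01T00:00:00", ts_end="2014-04-01T00:15:00")

# Equivalent unix-timestamp form (seconds since the epoch)
items_b = broker.query(ts_start="1396310400", ts_end="1396311300")
```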
Getting kind of off-topic ;) but I wanted to let you know I had no luck with either of these: The last one is the ISO format from
This is a great catch! Will attempt to patch these later today.
I've tested Broker.query() as well, for a more complete picture. For timestamps, an RFC/ISO-formatted datetime without a timezone or with Zulu (Z) works without issues; even with microseconds it's OK. An RFC/ISO-formatted datetime with a timezone with a negative offset like
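Until offset handling is consistent everywhere, one workaround is to normalize timezone-aware datetimes to UTC and hand the broker a plain UTC string. A small standard-library helper (a sketch, not part of pybgpkit):

```python
from datetime import datetime, timezone

def to_utc_string(dt: datetime) -> str:
    """Return an RFC 3339-style UTC string with a 'Z' suffix.

    Naive datetimes are assumed to already be in UTC.
    """
    if dt.tzinfo is not None:
        dt = dt.astimezone(timezone.utc)
    return dt.replace(tzinfo=None).isoformat() + "Z"

# A local time with a negative offset becomes a plain UTC string:
print(to_utc_string(datetime.fromisoformat("2014-03-31T20:00:00-04:00")))
# -> 2014-04-01T00:00:00Z
```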
Hi @mikolaj-kow, thank you for your efforts! Are you using the default broker instance or a self-hosted one? I've updated the default instance to allow float-number search. However, I cannot reproduce the other timezone-related issues you mentioned (see the updated test here: https://github.com/bgpkit/pybgpkit/blob/7f4e6cd2d654c505821554daf720d10a9415fce8/bgpkit/test_integration.py#L28).
You're welcome, I'm happy I can help a bit. I'm using a self-hosted Docker instance due to a considerable number of queries. It appears that the default broker works just fine for the same query.
However, if I escape the + sign with
The broker-api container logs
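For reference, when building the query URL by hand against a self-hosted instance, percent-encoding the parameters keeps the `+` from being decoded back into a space on the server side. A standard-library sketch; the base URL, port, and parameter names here are assumptions for illustration only:

```python
from urllib.parse import quote, urlencode

ts_start = "2014-03-31T20:00:00-04:00"
ts_end = "2014-03-31T20:15:00+02:00"

# quote_via=quote percent-encodes '+' as %2B (and spaces as %20), so the
# offset survives the round trip. The endpoint below is a placeholder.
query = urlencode({"ts_start": ts_start, "ts_end": ts_end}, quote_via=quote)
url = f"http://localhost:8080/search?{query}"
print(url)
```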
@mikolaj-kow Thank you for providing us with the detailed logs! Currently, there is a tech-stack discrepancy between our hosted version and the Docker version. Our hosted version has been updated to a serverless JavaScript-based API, which can handle concurrent requests more efficiently and integrates seamlessly with our backend Postgres instance. The Docker version, on the other hand, remains a Python-based API, and there are some differences in the parameter checking.

We are actively working on closing this gap by incorporating a PostgREST instance into the Docker setup, which will enable querying via HTTP and allow us to reuse the same JavaScript API for the self-hosted version. We anticipate that this upgrade will be completed in the coming weeks.

In addition, we have made significant improvements to our hosting infrastructure, and the default broker is now fully equipped to handle large numbers of requests. Please feel free to use it for your purposes. If you have any further concerns or require customization, please do not hesitate to reach out to us at [email protected].
Greetings,
In the broker backend documentation, the timestamp is defined as the time of the beginning of the data dump, which usually aligns on the full five-minute marks.
For example, I want to request 15 minutes of data, therefore I expect to receive three `BrokerItem`s. However, the broker produces four `BrokerItem`s. This results in
If I try this:
I get this:
So it seems that the timespans I defined are already shorter than 15 minutes.
An easy workaround is using `ts_start="2014-04-01 00:00:01", ts_end="2014-04-01 00:14:59"`.
Best regards