
restore POST functionality to osrm-routed #5794

Closed
rasputnik opened this issue Aug 6, 2020 · 6 comments
@rasputnik

I'm aware this was raised a couple of years back (#4211), and I'm happy to comment there if you'd prefer, but we're struggling with the decision not to support POST requests here. In our use case we send data specifically to the /route/v1/driving/ endpoint, and many of our routes contain 1000+ datapoints, so the URLs become ridiculously long.

We can work around the URL length in clients and so on, but many load balancers (which we use to add HTTPS and authentication to the service we're running) really dislike URLs that long. With many cloud providers there simply isn't an option to increase header buffers to that extent, so we're forced to roll our own load balancing.
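For a rough sense of scale (a back-of-the-envelope sketch with an assumed coordinate precision, not figures from the thread): each lon,lat pair at six decimal places costs about 20 bytes in the request path, so a 1000-point route blows well past the 4-8 KB request-line buffers many proxies ship with by default (e.g. nginx's `large_client_header_buffers 4 8k;`).

```python
# Rough size of a /route/v1/driving/ GET URL as the waypoint count grows.
# A "13.388860,52.517037"-style pair is 19 bytes, plus a ';' separator.

def route_url_length(n_points: int, base: str = "/route/v1/driving/") -> int:
    """Approximate request-path length for n_points lon,lat pairs
    written with six decimal places."""
    coord = "13.388860,52.517037"  # 19 bytes, a typical pair
    return len(base) + n_points * (len(coord) + 1) - 1  # ';' between pairs

for n in (25, 100, 1000, 5000):
    print(n, route_url_length(n))
```

At 1000 points the path alone is around 20 KB, before any query parameters, which is exactly the territory where managed load balancers start rejecting requests outright.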

There's also the issue that some of the location data being sent (for map matching) is potentially sensitive, so access logs containing full routes need to be secured, etc.

Would restoring POST support be something you'd consider? Or are we just going to have to live with this?

@systemed
Member

Ultimately I'd like to see us move to a more robust HTTP server, as per https://github.com/daniel-j-h/libosrm-http-casablanca, which could both enable more flexible requests and make it easier for users to customise their osrm-routed server. Using the Node bindings is of course also an option.
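Until something like that lands, one common workaround is a thin shim in front of osrm-routed that accepts the coordinates in a POST body and rewrites them into the GET path the server already understands. A minimal sketch of that rewrite step (the JSON body shape here is our own convention, not an OSRM one; the query-parameter names mirror the route service's):

```python
import json
from urllib.parse import urlencode

def post_body_to_route_path(body: bytes, profile: str = "driving") -> str:
    """Rewrite a POST JSON body like
        {"coordinates": [[13.38886, 52.517037], [13.397634, 52.529407]],
         "overview": "false"}
    into the GET path osrm-routed expects. Keys other than "coordinates"
    are passed through as query parameters unchanged."""
    req = json.loads(body)
    coords = ";".join(f"{lon},{lat}" for lon, lat in req.pop("coordinates"))
    query = urlencode(req)
    path = f"/route/v1/{profile}/{coords}"
    return f"{path}?{query}" if query else path
```

A real deployment would wrap this in whatever frontend you already run (an Express route over the Node bindings, an nginx/Lua rule, or a small WSGI app) and forward the rewritten path to osrm-routed over loopback, which also keeps the oversized coordinate lists out of edge access logs.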

@drew887

drew887 commented Dec 11, 2023

Here to give this a bump; this is a pretty serious issue since, as @rasputnik pointed out, when you're at the mercy of cloud providers you have no solution to this problem, and they will simply tell you that you must use POST for requests this large.

And while using the Node bindings inside something like Express gets tossed around as the magical "fix" for this in the issues here, the same problem holds: we might not be able to use those Node bindings at all (see the endless issues with npm being an attack surface). Beyond that, it has become standard practice that requests with extremely large numbers of parameters/inputs should be sent as POST/PUT/not-a-GET, both at a semantic level and for the myriad reasons raised in all the issues around this point over the years.

There is clearly demand from users for this. And while normally I'd be the first to say "if we as the users want it, we should open a PR that adds it", the existing closed issues around this make it seem that you would not accept such a PR to begin with.

So I'm curious: what is the major roadblock to making it a POST request? Is there some code-level issue that makes it impossible or unfeasible? If so, maybe the solution is to document that fact somewhere?

@nilsnolde
Contributor

nilsnolde commented Dec 11, 2023

(not an OSRM contributor) It's definitely not infeasible or impossible; it's very likely just a time issue. There's been no active development for a while now, and people are just busy elsewhere for the time being. That might change again any day, but even then those people might not have their priorities on supporting POST. I've also never heard of anyone doing consulting work on OSRM. I always planned to do that at some point, but it keeps getting pushed back; Valhalla is already quite a beast to deal with.

Anyway, I'm digressing. Switching to a completely different server library requires time and effort, and to me it seems previous attempts were trying to patch the existing boost (demo) solution. One alternative could actually be https://github.com/kevinkreiser/prime_server. That's not going into maintenance mode anytime soon, not as long as Valhalla doesn't (along with all the companies using private forks). It's still missing a few semi-important things, like OPTIONS support for pre-flight requests and gzip compression, but both of those can be dealt with at the apache/nginx level, which you'd likely have in front anyway. It can handle POST of course, supports graceful shutdown and client- and server-side interruption, and offers a few more interesting goodies, though none of them are really relevant for OSRM (e.g. PBF MIME type and unix domain sockets). I'm obviously biased, but I think it could be a good fit. Also note that this wouldn't necessarily have to be implemented in OSRM itself: it could easily be another project embedding libosrm without any changes upstream. In the end that would be no different from doing it right here; it's just an executable running a number of server threads in the background, except the HTTP library would come for free.

@nilsnolde
Contributor

I think that'd be your best chance, really. If you need some inspiration, you can look at projects like https://github.com/VROOM-Project/vroom, which builds on libosrm, albeit not directly as a server.

@danpat
Member

danpat commented Dec 12, 2023

@drew887 We'd absolutely accept a PR - but as @SiarheiFedartsou discovered with #6294 (which ended up getting reverted), it's never as easy as you think it'll be.


github-actions bot commented Jul 8, 2024

This issue seems to be stale. It will be closed in 30 days if no further activity occurs.

@github-actions github-actions bot added the Stale label Jul 8, 2024
@github-actions github-actions bot closed this as not planned Aug 9, 2024