Speed issues with Valorant patching #209
Comments
Noticing the same issue with LoL; assuming it's a Riot issue, as the launcher decides that there is no connection to the servers and ends the connection (error 004, patching failed). The launcher will then restart the download for a few seconds before erroring out again.
Are there still speed issues with Valorant and LoL?
The patching system for Riot requests lots of small byte ranges. This means that your client will behave very strangely through the cache: you will see it do nothing for a while, during which the cache is fetching all the 1MB slices those byte ranges fall into, then see it spike to max as it hands back a chunk at full speed. In the worst case you have many small byte ranges, all in different slices. It's not something we think we can do anything about, but perhaps @v3n can shed some light?
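To illustrate why this behaves so oddly through a slicing cache, here is a rough Python sketch (with made-up ranges, not actual Riot requests) of how a few tiny byte ranges expand into whole 1MB slice fetches:

```python
# Rough illustration of why small byte ranges are expensive through a
# 1 MB slicing cache: every requested range is rounded out to whole
# slices, so the cache may fetch far more than the client asked for.
SLICE = 1024 * 1024  # 1 MiB slice size

def slices_for_range(start: int, end: int) -> range:
    """Return the slice indices an inclusive byte range [start, end] touches."""
    return range(start // SLICE, end // SLICE + 1)

# Hypothetical patcher request: three tiny ranges scattered across a file.
ranges = [(5, 900), (3_500_000, 3_500_200), (90_000_000, 90_000_050)]
needed = {i for s, e in ranges for i in slices_for_range(s, e)}
requested = sum(e - s + 1 for s, e in ranges)

print(f"client asked for {requested} bytes")                       # 1148 bytes
print(f"cache must fetch {len(needed)} slices = {len(needed) * SLICE} bytes")  # ~3 MB
```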
I've sorted the issue by disabling slicing. I don't recommend this, as performance is worse when caching other games, but as I run a very small LAN on a slow connection, having clients wait for Riot to patch while seeing 0.1KB/s for several minutes was just impossible: they would start messing with their DNS and restarting the client to try to solve the issue themselves.
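For reference, the slicing in question is nginx's slice module. Below is a minimal sketch of how slicing is typically wired into an nginx cache, adapted from the nginx documentation rather than from lancache's actual config (the cache zone name and upstream are placeholders); the workaround above amounts to removing the `slice`/`$slice_range` lines so nginx caches whole files:

```nginx
# Minimal sketch per the nginx slice module docs; not lancache's real config.
location / {
    slice              1m;                           # fetch upstream in 1 MB sub-requests
    proxy_cache        generic;                      # placeholder cache zone
    proxy_cache_key    $uri$is_args$args$slice_range;
    proxy_set_header   Range $slice_range;           # forward only this slice's range
    proxy_http_version 1.1;                          # byte ranges require HTTP/1.1
    proxy_cache_valid  200 206 1h;
    proxy_pass         http://origin;                # placeholder upstream
}
```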
Interesting, do you have an idea how it impacts other services like Windows Update?
No obvious downside with Windows Update, but it's hard to benchmark. Battle.net is even worse on first download, but as there is a prefill tool for it, it's much less of an issue. Some Steam games also have this issue (I presume because they contain large files), but again the prefill tool negates most of it. I think optimally two instances should be used to give the best performance, but as I'm only on gigabit LAN it works well enough without slicing.
@v3n do you have any info on this? It has been open for a while. @Sidesharks's workaround does help a bit but is not recommended by the lancache docs.
Is there any update on this case? @v3n, are you still able to provide us with insights on this?
Hello, is there any hope of having this patched by the dev team? Is there anything else we can do?
@jblazquez sorry for the ping, but maybe there is something we can do about this :D!
Hi @IIPoliII, thanks for the ping. So if I understand the issue correctly, the problem is that the small HTTP range requests that the Riot patcher makes do not work well with lancache? Unfortunately, as explained in my original post when we switched to the new patcher, this is how downloads work now: we rely on CDNs (and caches) being able to handle multipart HTTP range requests efficiently, because that is how we retrieve all of the chunks of data that we need (here is an article that I wrote around that time explaining a bit more how the patcher works).
I'm not familiar with how lancache slicing works exactly, but in theory that should be a good approach: retrieving 1MB ranges of data around the requested bytes, then caching them for future requests that hit those same ranges (and eventually caching the full file). I'm pretty sure that's how Akamai's CDN works, for example. Like with any cache, the first requests for uncached objects will take longer as the cache needs to fill the data from the origin, but after that initial retrieval, other people should be experiencing fast speeds. Is that not the case?
Sorry I can't be of much help. I don't know much about the internals of this caching system.
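For anyone unfamiliar, a multipart range request is a single GET carrying several ranges in one Range header. A quick Python probe (placeholder URL, not a real Riot CDN path) shows the distinction the patcher cares about:

```python
# Probe a server's multipart range behavior. A server that supports
# multipart ranges answers 206 with a multipart/byteranges body; one
# that does not typically answers 200 with the entire file.
import requests  # third-party: pip install requests

url = "http://example.com/some-large-file"  # placeholder URL
resp = requests.get(
    url,
    headers={"Range": "bytes=0-1023,1048576-1049599"},  # two ranges, one request
    stream=True,
)

print(resp.status_code)                   # 206 if ranges honored, 200 if not
print(resp.headers.get("Content-Type"))   # multipart/byteranges; boundary=... on success
```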
@Sidesharks, you mentioned a prefill tool for Blizzard games. Can you give me some more details on that? How does the tool work?
Hi Javier, thanks for taking the time to respond on this thread. I've done a good bit of digging into this, and I can fill you in a bit more on why (at least from my understanding) it is happening.
Lancache is currently configured to use the 1MB ranges (source) that you suggested, and they work great for most CDNs that use range requests. Any single range request will be returned as expected with a 206 Partial Content.
Things don't work quite the same way when using multipart range requests. Instead of Nginx properly returning a 206 with the requested parts, it returns a 200 with the entire file. From reading through the Riot client logs, it looks like this is causing the client to freak out over getting an unexpected 200.
The behavior here is most certainly the fault of Nginx, and it can be corrected by disabling slicing altogether. However, for some CDNs like Battle.net that would be extremely undesirable, as they pack their content into 256MB archives on their CDN, and for some games like World of Warcraft there are a large number of archives: 1,234 at the time of writing. Hitting all of them for even a single byte would use nearly 308GB of cache just to cover an actual download size of 97GB.
As far as where the solution lies, I'm not really sure at the moment. However, I hope that with some more info we can work towards one.
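As a quick sanity check on those figures (using the archive count and packing size quoted above):

```python
# With slicing disabled, touching even one byte of an archive makes
# nginx cache the whole archive.
archives = 1234                   # WoW archive count quoted above
archive_size = 256 * 1024 ** 2    # 256 MB per Battle.net CDN archive
cache_cost = archives * archive_size

print(f"{cache_cost / 1024 ** 3:.0f} GB cached")  # ~308 GB, vs a 97 GB install
```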
The prefill tool for Blizzard games is battlenet-lancache-prefill, and there are Steam and Epic versions as well. I also have a Riot one that I've been working on, but that's private on GitHub at the moment since it's not complete. These tools are all functionally identical, and if you'd like to look at the documentation I'd recommend the Steam version, since it has the most up-to-date and detailed docs.
Battle.net has a similar issue to the Riot one, where the initial uncached download via the Battle.net client will be extremely slow: below 10Mbit/s at best. Once everything has been cached there is no issue at all; all of the range requests made by the client come back as expected. I've never completely determined why the Battle.net client has this issue, but since there is no way to adjust how the client itself works with lancache, I decided to take the approach of writing my own client.
BattlenetPrefill is simply a custom client that downloads the appropriate Battle.net manifests, builds out the list of requests that need to be made in order to download from their CDNs, and then downloads them in parallel as quickly as possible. Anything past downloading is skipped: no validation, decompression, or writing to disk. Since all of those extraneous steps are skipped, BattlenetPrefill (as well as the other prefills) can pull from the CDNs faster than the actual client ever could; I've seen it tested as high as 5Gbit/s over WAN.
The intended workflow for BattlenetPrefill is to use it to prime your cache ahead of time, so with all of the data cached, end users no longer suffer from the download stalling issue in the actual Battle.net client when installing a game.
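To make the prefill idea concrete, here is a minimal Python sketch of the same pattern (not the actual BattlenetPrefill code; the URLs are placeholders): fetch every needed URL in parallel and discard the bytes, so the only side effect is that the cache in the middle gets populated.

```python
from concurrent.futures import ThreadPoolExecutor
import requests

def warm(url: str) -> int:
    """Stream a URL and discard the body; returns bytes pulled through the cache."""
    total = 0
    with requests.get(url, stream=True, timeout=60) as resp:
        resp.raise_for_status()
        for chunk in resp.iter_content(chunk_size=1 << 20):
            total += len(chunk)  # no validation, decompression, or disk writes
    return total

urls = [  # placeholder URLs; a real tool builds this list from manifests
    "http://example.com/archive-0001",
    "http://example.com/archive-0002",
]
with ThreadPoolExecutor(max_workers=8) as pool:
    print(sum(pool.map(warm, urls)), "bytes prefetched")
```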
@tpill90, thanks for the detailed explanation! Yes, NGINX does have an issue where it will not honor multipart range requests on cache miss, and that is precisely why we have retries when we receive a 200.
The Riot patcher will retry multipart range requests up to 5 times, and if the server insists on returning a 200 it gives up on the multipart request. So unfortunately the issue remains in NGINX, and efficient multipart range requests will work only on cache hit, although there is an old patch on the nginx mailing list that adds support for this and may be worth looking at.
I think prefilling is going to be the solution here. We can't share our internal tools, which can be used for prefilling, but I think there are publicly available tools you can use quite easily. If you're not familiar with @moonshadow565's tools for processing Riot patcher manifests and bundles, you can find them here: https://github.com/moonshadow565/rman
The rman tools can list the bundles that a given manifest references, and then you can tell your prefilling tool or script to download those bundles from the CDN. Note that I can't vouch for moonshadow's tools, but I believe they work well, and may be useful for your use case. Let me know if this helps.
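A toy Python version of the retry behavior described above (URL and ranges are illustrative, and the fall-back to a plain full-file GET is an assumption rather than the patcher's exact logic):

```python
import requests

def fetch_ranges(url: str, ranges: str, attempts: int = 5) -> requests.Response:
    """Try a multipart range request a few times before giving up on it."""
    for _ in range(attempts):
        resp = requests.get(url, headers={"Range": ranges}, stream=True)
        if resp.status_code == 206:  # server honored the (multipart) range
            return resp
    # Server insists on 200: abandon multipart and take the full body.
    return requests.get(url, stream=True)

resp = fetch_ranges("http://example.com/bundle", "bytes=0-4095,1000000-1003999")
print(resp.status_code)
```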
Hey Javier, thanks for the comprehensive response. It's been incredibly useful and has confirmed what we had already expected: nginx's inability to deal with multiple-range requests is triggering this behaviour, and makes for a less-than-ideal experience for the end user.
We always strive to make Lancache as transparent to the user as possible, so we're going to investigate rolling the patch you linked into the nginx build that Lancache uses by default. I believe this patch is the missing link we needed to support this behaviour, so thanks for pointing it out!
@tpill90 has already done some sterling work on our end, and from preliminary testing it appears that the patched nginx does successfully return a 206 for a multi-range request, and also appears to work properly with the slicing module that we use in Lancache. We have a test candidate LAN party this weekend where this can be tested at a slightly larger scale to see if there are any regressions with any other CDN provider. We'll report back and let you know how it goes.
I'm glad that very old patch still works! I'm curious to hear how the event this weekend goes. Please update this thread if you can :)
@jblazquez A quick follow-up question for you: is there a max size for the bundles? I checked your blog post and couldn't see a size mentioned anywhere, and from what I'm seeing in my testing they don't go over 8MB. I just wanted to know if there are any edge cases I haven't seen so far. Thanks again!
Hi all, we have seen similar issues and are looking for a solution. The lancache we run services a large number of clients. Thank you.
Sorry, didn't get notified about this reply. Yes, the absolute maximum size of bundles is 75MB, but they rarely go above 16MB or so. They will probably get larger on average in the near future (so don't hardcode an 8MB maximum), but 75MB is the max we support in code.
Hi, was this patch successful at the LAN party, and if so, is this something that can be implemented as a fix for lancache?
@Lepidopterist Were you able to solve the problem with the nginx patch? How did the test go?
Did some testing during an event. I applied the git patch from the discussion on the nginx mailing list mentioned by @jblazquez in this commit in my own repo. It seems that the proxy function now replies with HTTP 206 and all the slice (sub)requests, and the Riot Games client seems satisfied. But sadly this is without cache: if I enable the cache function, it only replies with the first slice in the request. Even with the prefill tool @jblazquez mentioned, the issue remains with cached (sub)requests.
I need to confirm that I'm heading in the right direction and ask about this on the nginx mailing list. Since I'm not so much into C/C++, we need to convince someone to patch this. What do others think?
EDIT: also, I'm curious whether @Lepidopterist experienced similar behavior during testing at his event.
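One way to check what the patched nginx actually returns is to count the parts in the multipart/byteranges body (placeholder URL; assumes an unquoted boundary parameter). Two requested ranges should come back as two parts; the "only the first slice" failure shows up as one:

```python
import requests

url = "http://lancache.example/bundle"  # placeholder
resp = requests.get(url, headers={"Range": "bytes=0-1023,2097152-2098175"})
ctype = resp.headers.get("Content-Type", "")

if resp.status_code == 206 and "multipart/byteranges" in ctype:
    boundary = ctype.split("boundary=")[1].encode()
    # Each part starts with "--boundary"; the final "--boundary--" closes the body.
    parts = resp.content.count(b"--" + boundary) - 1
    print(f"206 with {parts} part(s)")
else:
    print(f"unexpected {resp.status_code} ({ctype or 'no content type'})")
```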
Describe the issue you are having
There seems to be an issue with Valorant speeds on lancache. As referenced in issue #164, Valorant caching has been enabled for a while but seems to be working ineffectively.
Describe your setup?
Unbound DNS server pointing towards the lancache Docker container
Are you running sniproxy
no
DNS Configuration
Visible here: https://github.com/33Fraise33/personal-ansible/tree/main/roles/unbound/tasks