SplitHTTP: More range options, change defaults, enforce maxUploadSize, fix querystring behavior #3603

Merged 3 commits into XTLS:main on Jul 29, 2024

Conversation

@mmmray (Collaborator) commented Jul 27, 2024

Follow up from #3592

  • maxUploadSize and maxConcurrentUploads can now be ranges on the client
  • maxUploadSize is now enforced on the server
  • the default of maxUploadSize is 2MB on the server, and 1MB on the client
  • the default of maxConcurrentUploads is 200 on the server, and 100 on the client
  • ranges on the server are treated as a single number: if the server is configured as "1-2", it enforces 2
  • querystrings in path are now handled correctly

@RPRX (Member) commented Jul 27, 2024

The server does not enforce maxUploadSize at all. In the original design it was assumed the CDN already has (too many) protections around this.

I remember Cloudflare's free tier supports 100MB uploads, but our servers usually can't handle 100MB * 100, so the server side should also enforce a limit.

@RPRX (Member) commented Jul 27, 2024

Actually, thinking about it that way, even 1MB * 100 is quite a problem; a few more streams could overwhelm the server...

@Fangliding (Member)

When the server's buffer is full, just close the entire connection; what's left is ordinary layer-7 DDoS.

@RPRX (Member) commented Jul 27, 2024

By the way, could you open a PR that moves the parameters after the path to after the new path? I plan to merge these PRs, release v1.8.22 first, and then add multiplex control.

@mmmray (Collaborator, author) commented Jul 27, 2024

By the way, can you open a PR and move the parameters after the path to the new path? I plan to merge these PRs and publish v1.8.22 first, and then add multiplex control.

The translator did not catch this. Do you want to make these things part of the querystring, like ed?

@mmmray changed the title from "SplitHTTP client: maxUploadSize allows for a random range" to "SplitHTTP: More random ranges, change defaults, enforce maxUploadSize on server" on Jul 27, 2024
@mmmray (Collaborator, author) commented Jul 27, 2024

it was more convenient to make all the pending changes in the same PR. I updated the PR description and title to reflect this.

I'm starting to lose track again of what other changes are needed. Let's see:

  • enforcement of minUploadInterval on the server (I am not sure this can be done easily)
  • the path change (not sure what it means)

@Fangliding

When the server buffer is full, just close the entire connection. The rest is ordinary seven-layer DDoS.

This is how it has worked since 1.8.16, but before this PR the buffer was only constrained by packet count, not by total size in bytes.

@mmmray changed the title from "SplitHTTP: More random ranges, change defaults, enforce maxUploadSize on server" to "SplitHTTP: More range options, change defaults, enforce maxUploadSize on server" on Jul 27, 2024
@@ -75,7 +75,7 @@ func (h *requestHandler) upsertSession(sessionId string) *httpSession {
}

s := &httpSession{
uploadQueue: NewUploadQueue(int(2 * h.ln.config.GetNormalizedMaxConcurrentUploads())),
@mmmray (Collaborator, author) commented:

When changing it, I realized the server was already implicitly allowing 2 * maxConcurrentUploads. I changed it so the limit can be set more directly, and doubled the default instead. I think this 2 * was only ever meant as a temporary hack.

@RPRX (Member) commented Jul 28, 2024

enforcement of minUploadInterval on the server (I am not sure this can be done easily)

This can be left unimplemented for now.

the path change (not sure what it means)

For example, right now /path?query becomes /path?query/uuid; I want it to become /path/uuid?query.

Once this is done I will merge this PR and release v1.8.22; until then, please don't push new commits to main.

@mmmray changed the title from "SplitHTTP: More range options, change defaults, enforce maxUploadSize on server" to "SplitHTTP: More range options, change defaults, enforce maxUploadSize, fix querystring behavior" on Jul 28, 2024
@mmmray (Collaborator, author) commented Jul 28, 2024

done

@RPRX merged commit 59f6685 into XTLS:main on Jul 29, 2024
36 checks passed
@RPRX (Member) commented Jul 29, 2024

After merging a few more PRs I'll publish a new release. Also, the version after next needs two new server-side options:

  1. noSSEHeader, because SSE traffic is usually small, so I guess CDNs can throttle based on it
  2. lenOK, because the length of the server's response header should currently be completely fixed; look at it in WireShark, and I guess identifying SplitHTTP from it would be very easy

@RPRX (Member) commented Jul 29, 2024

Actually, noSSEHeader is so simple we might as well add it now; I'll write it.

@Fangliding (Member)

After merging a few more PRs I'll publish a new release. Also, the version after next needs two new server-side options:

  1. noSSEHeader, because SSE traffic is usually small, so I guess CDNs can throttle based on it
  2. lenOK, because the length of the server's response header should currently be completely fixed; look at it in WireShark, and I guess identifying SplitHTTP from it would be very easy

Just hardcode oklen to a random range; making it an option doesn't seem very meaningful ((

RPRX added a commit that referenced this pull request Jul 29, 2024
@RPRX (Member) commented Jul 29, 2024

Damn, I wrote it as "client"; Fangliding, please force-push it away for me.

Just hardcode oklen to a random range; making it an option doesn't seem very meaningful ((

If customization isn't allowed, it's always a fixed target. My idea is to allow customization with a default random range, requiring client v1.8.21+. In any case, this question still needs some research.

	}
} else {
	return RandRangeConfig{
		From: 100,
(Member):
The default value is changed from 10 to 100, really?

@mmmray (Collaborator, author) commented Jul 29, 2024:

I think the justification for this is that minUploadInterval=30 should prevent the limit from being reached in practice (#3592 (comment)). I also don't see this limit being reached at all in my tests (the number of connections/streams stays below 20 at all times), so I am not convinced it needed to be raised by that much (or at all).

(Member):

Look at it this way: originally I wanted to delete this option entirely, but it does still have some use on the server side.

(Member):

I also don't see this limit being reached at all in my tests (number of connections/streams is lower than 20 at all times), so I am also not convinced it needed to be raised by that much (or at all)

If someone sets minUploadInterval=10 for gaming, the number of concurrent streams should go up, I guess.

mmmray pushed a commit that referenced this pull request Jul 29, 2024
@mmmray (Collaborator, author) commented Jul 29, 2024

Damn, I wrote it as "client"; Fangliding, please force-push it away for me.

fixed

@mmmray deleted the max-upload-size-range branch on July 29, 2024 06:26
@RPRX (Member) commented Jul 29, 2024

Damn, I wrote it as "client"; Fangliding, please force-push it away for me.

fixed

Meanwhile I was also force-pushing, but it doesn't matter; I intended to write "server" anyway.

RPRX added a commit that referenced this pull request Jul 29, 2024
@Fangliding (Member)

I looked all over for the difference and couldn't find it.

Then I realized it was the commit message.

@RPRX (Member) commented Jul 29, 2024

@Fangliding the documentation needs three fixes:

  1. The config example should read "maxConcurrentUploads": 100,
  2. The `` around minUploadIntervalMs were typed as ''
  3. Change "can also be a string of the form "30-60"" to 10-50; in the release notes I'll suggest gamers set it to 10, although gaming over a CDN route...

By the way, if the client is uploading at full speed and the CDN happens to drop one POST, will the server's 200MB cause an OOM?

@mmmray (Collaborator, author) commented Jul 29, 2024

By the way, if the client is uploading at full speed but the CDN accidentally loses a POST, will the server 200MB cause OOM?

Unless the connection from the client is HTTP/1.1, the client will see a faulty response and should terminate the connection entirely. With HTTP/1.1, request pipelining behaves like this: at most 200 * 1MB chunks pile up, or the client terminates the connection because the upload gets stuck and the tunneled application times out.

@RPRX (Member) commented Jul 29, 2024

v1.8.22 has been released. Normally a CDN won't drop a POST. Let's see whether the next version should change the default of maxConcurrentUploads to 50 on both ends.

@RPRX (Member) commented Jul 29, 2024

lenOK, because the length of the server's response header should currently be completely fixed; look at it in WireShark, and I guess identifying SplitHTTP from it would be very easy

@mmmray please look into this and write lenOK

@Fangliding in the docs, noSSEHeader needs to be marked as server-only; the "server behavior" below it was mistakenly written as "client"; and the headers option of the four transports needs to be marked as client-only

@ll11l1lIllIl1lll (Contributor) commented Jul 29, 2024

2. lenOK, because the length of the server's response header should currently be completely fixed; look at it in WireShark, ~and I guess identifying SplitHTTP from it would be very easy~

How about stuffing a hundred-thousand-character cookie header into the response? Later (s∞n) it could be changed to early data.

mmmray added a commit to mmmray/Xray-core that referenced this pull request Aug 16, 2024
Since XTLS#3603 and
XTLS#3611, iperf and speedtest are
triggering "too large upload" by the server. This is because v2ray's
MultiBuffer pipe can actually return data larger than the configured
size limit.

I'm surprised nobody noticed it so far. In principle, any heavy upload
could disrupt the entire connection. In its infinite wisdom,
speedtest.net hides such errors and only shows low upload instead. I
only noticed this issue myself when inspecting server logs.