Potential vulnerability: overflowing and truncating casts #3440
Comments
@lovasoa you have issues disabled on sqlx-oldapi, so I'll leave it up to you whether to file your own RUSTSEC advisory.
Thank you very much @abonander! I just activated issues in sqlx-oldapi, and will look into this.
I will link the talk when it is available.
There's a version available on the Wayback Machine: https://web.archive.org/web/20240814175011/https://media.defcon.org/DEF%20CON%2032/DEF%20CON%2032%20presentations/DEF%20CON%2032%20-%20Paul%20Gerste%20-%20SQL%20Injection%20Isn't%20Dead%20Smuggling%20Queries%20at%20the%20Protocol%20Level.pdf
Hi, tl;dr: Is this attack PostgreSQL-only, or do you think MySQL/SQLite could also be affected? From the screenshot, this attack seems to only impact the PostgreSQL backend. However, it is likely the other backends are written in a similar manner and hence might also be vulnerable (though in different ways, since the binary protocol is obviously different). I would say the SQLite backend is most likely not affected, since it doesn't use a binary protocol as far as I am aware.
The MySQL driver has some suspect casts, but the actual packet length encoding is sound from what I've seen. However, because the MySQL protocol is more contextual than the Postgres protocol, there may be other length-encoded values that can overflow and cause misinterpretation. I haven't gotten to that part of the audit yet. I don't have a solid answer for SQLite yet. Yes, it doesn't have a binary protocol, but we do have to pass lengths for things like blobs when we bind them, and those may have incorrect casts since C APIs tend to prefer signed integers.
Update: the MySQL driver appears to be mostly fine; we already had strong checks on the lengths of encoded packets thanks to the support for packet splitting. The rest of the audit there was just adding sanity checks. The SQLite driver had one cast that concerned me in the code path where we pass a length to the C API when binding a value. The SQLite3 API indeed loves to sprinkle plain `int` lengths everywhere.
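To illustrate the concern with C APIs, here is a minimal generic sketch (not the actual sqlx or SQLite3 binding code; the length value is made up): a bare `as` cast of a large length to a C-style `int` silently wraps, while a checked conversion surfaces the problem.

```rust
use std::os::raw::c_int;

fn main() {
    // Hypothetical blob length of ~3 GB; nothing this large is actually
    // allocated here, we only reason about the length value.
    let len: usize = 3_000_000_000;

    // A bare `as` cast silently wraps to a negative number, which is what a
    // C API taking a signed `int` length would see.
    let wrapped = len as c_int;
    println!("as-cast length: {wrapped}");

    // A checked conversion surfaces the overflow as an explicit error instead.
    match c_int::try_from(len) {
        Ok(n) => println!("length fits in a C int: {n}"),
        Err(_) => println!("length too large for a C int; refuse to bind"),
    }
}
```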
There is a current issue in `sqlx` being investigated: launchbadge/sqlx#3440. In Engine, we already have request size limits, so this isn't an issue, but no such limits exist in the connectors. This PR adds a 100 MB limit to connector requests, in order to avoid problems if bad actors find ways to query NDCs directly.
Update two: CI on #3441 is passing; I'd like to at least try to reproduce the exploits on the base commit to have a good regression test.
I have created a regression test based on the exploit described in the slides.
I'm actually oddly proud of this, because I improved upon the slides by figuring out a more versatile method of padding the payload in a way that won't break the connection, at least from a protocol perspective. As it turns out, the Postgres backend will read and discard these padding messages. This means that, not only can we use them to pad the payload, but we can also use a final one to leave the protocol in a consistent state. However, ironically enough, the connection will still break, because the injected messages interact badly with this code:

sqlx/sqlx-postgres/src/connection/mod.rs, line 106 (commit 6f29056)
The connection will then hang in this loop, as it's going to expect responses that never arrive:

sqlx/sqlx-postgres/src/connection/mod.rs, lines 82 to 88 (commit 6f29056)
What I thought might be a similar vulnerability in the SQLite driver turned out to be a non-issue, because I had missed this check:

sqlx/sqlx-sqlite/src/statement/virtual.rs, lines 59 to 64 (commit 6f29056)
As expected, MySQL doesn't appear to be exploitable in this fashion. It actually has pretty tight limits on packet sizes by default and is hardcoded not to accept any packet larger than 1 GiB: https://dev.mysql.com/doc/refman/8.4/en/packet-too-large.html
Final update: 0.8.1 has been released with #3441: https://github.com/launchbadge/sqlx/blob/main/CHANGELOG.md#081---2024-08-23
Thanks for the quick fix! I reported the problem on Discord, and I also forwarded it to sea-orm, which is based on sqlx and used by many businesses.
For those who are facing the breaking changes introduced by the fix, here is how we updated our code: https://github.com/cloudwalk/stratus/pull/1673/files

Posting here just in case I save someone's time :).
Context
User "Sytten" on Discord brought to our attention the following presentation from this year's DEFCON: https://media.defcon.org/DEF%20CON%2032/DEF%20CON%2032%20presentations/DEF%20CON%2032%20-%20Paul%20Gerste%20-%20SQL%20Injection%20Isn't%20Dead%20Smuggling%20Queries%20at%20the%20Protocol%20Level.pdf
(Note: if there's a public page or blog post for this, or one is posted in the future, please let me know so I can include it for posterity.)
Essentially, encoding a value larger than 4 GiB can make the length prefix in the protocol overflow, causing the server to interpret the rest of the string as binary protocol commands (or other data? it's not clear).
It appears SQLx does perform truncating casts in a way that could be problematic, for example:
sqlx/sqlx-postgres/src/arguments.rs, line 163 (commit 6f29056)
This code has existed essentially since the beginning, so it is reasonable to assume that all published versions are affected.
It's hard to glean just from the slides exactly how this may be exploited, and in any case it requires a malicious input at least 4 GiB long.
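To make the failure mode described above concrete, here is a minimal sketch (not the actual sqlx code; the variable names are illustrative and a 64-bit target is assumed) of how a length just past 4 GiB wraps when cast to a 32-bit length prefix:

```rust
fn main() {
    // Hypothetical encoded parameter just past the 4 GiB boundary
    // (64-bit target assumed; nothing this large is actually allocated).
    let payload_len: usize = (u32::MAX as usize) + 5;

    // A truncating `as` cast wraps the 32-bit length prefix instead of
    // rejecting the oversized value.
    let wire_len = payload_len as u32;

    println!("actual length:       {payload_len}"); // 4294967300
    println!("wrapped wire length: {wire_len}"); // 4, so a server would read only
                                                 // 4 bytes and parse the rest of
                                                 // the payload as new messages
}
```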
Mitigation
As always, you should make sure your application is validating untrustworthy user input. Reject any input over 4 GiB, or any input that could encode to a string longer than 4 GiB. Dynamically built queries are also potentially problematic if they push the message size over this 4 GiB bound.
`Encode::size_hint()` can be used for sanity checks, but do not assume that the size returned is accurate. For example, the `Json<T>` and `Text<T>` adapters have no reasonable way to predict or estimate the final encoded size, so they just return `size_of::<T>()` instead.

For web application backends, consider adding some middleware that limits the size of request bodies by default.
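As a rough sketch of the application-level validation described above (the limit, function name, and error handling are illustrative choices, not anything provided by sqlx):

```rust
/// Illustrative bound: far below the 4 GiB protocol limit and hopefully far
/// above anything a legitimate user would send; pick a value that fits your
/// application.
const MAX_PARAM_BYTES: usize = 1024 * 1024;

/// Reject oversized inputs before they are ever bound to a query.
fn validate_param(input: &str) -> Result<&str, String> {
    if input.len() > MAX_PARAM_BYTES {
        Err(format!(
            "input is {} bytes, which exceeds the {} byte limit",
            input.len(),
            MAX_PARAM_BYTES
        ))
    } else {
        Ok(input)
    }
}

fn main() {
    assert!(validate_param("ordinary input").is_ok());

    let huge = "x".repeat(MAX_PARAM_BYTES + 1);
    assert!(validate_param(&huge).is_err());
}
```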
Resolution
I have started work on a branch that adds `#[deny]` directives for the following Clippy lints:

- `cast_possible_truncation`
- `cast_possible_wrap`
- `cast_sign_loss`

and I'm auditing the code that they flag. This is the same approach being used by Diesel: diesel-rs/diesel#4170
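For reference, here is a minimal sketch of what denying those lints at a crate root looks like, together with the kind of cast they flag (the surrounding code is illustrative, not the actual sqlx branch):

```rust
// Placed at the top of a crate root (lib.rs or main.rs). rustc accepts the
// `clippy::` tool-lint names even when Clippy isn't running, so this compiles
// normally and the lints are only enforced under `cargo clippy`.
#![deny(
    clippy::cast_possible_truncation,
    clippy::cast_possible_wrap,
    clippy::cast_sign_loss
)]

fn main() {
    let len: u64 = 5_000_000_000;

    // Under `cargo clippy`, the denied lint turns this truncating cast into a
    // hard error:
    // let prefix = len as u32;

    // A checked conversion is the lint-clean replacement.
    let prefix = u32::try_from(len);
    println!("{prefix:?}"); // prints an Err for this oversized value
}
```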
In the process I realized that our CI wasn't running a lot of our unit tests, so I'm fixing that as well.
After the fix, attempting to encode a value larger than is allowed in a given binary protocol will return an error instead of silently truncating it.
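A sketch of the post-fix behavior in spirit (the helper name and error type here are made up for illustration; the actual change lives in #3441):

```rust
/// Hypothetical helper, not the actual sqlx code: compute the 32-bit length
/// prefix for a value, failing loudly instead of wrapping when the value is
/// too large for the protocol.
fn length_prefix(len: usize) -> Result<u32, String> {
    u32::try_from(len)
        .map_err(|_| format!("value of {len} bytes exceeds the protocol's 4 GiB limit"))
}

fn main() {
    // Normal values encode as before.
    assert_eq!(length_prefix(42), Ok(42));

    // Oversized values now produce an error instead of a truncated prefix
    // (64-bit target assumed for this length).
    assert!(length_prefix(u32::MAX as usize + 1).is_err());
}
```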
I will also be filing a RUSTSEC advisory.