Memory exploit mitigations #15179

Closed
kmcallister opened this issue Jun 25, 2014 · 16 comments
Labels
A-security Area: Security related issues (example: address space layout randomization)
C-tracking-issue Category: A tracking issue for an RFC or an unstable feature.
metabug Issues about issues themselves ("bugs about bugs")

Comments

@kmcallister
Contributor

This is tremendously open-ended, but at minimum we should implement the usual tricks from C compilers, such as stack canaries (#15180) and ASLR.

This will protect unsafe code, and will mitigate the impact of compiler bugs. Some of it will also protect buggy C code when it's linked with Rust.

The goal here isn't just to make these things possible but to have really painless toolchain support. In many cases the performance impact is insignificant and there's no reason not to compile with mitigations.

Beyond the established techniques, there are a lot of interesting research ideas we could implement. See for example Prof. Michael Franz's talk at Mozilla on compiler-generated software diversity.

@kmcallister
Contributor Author

LLVM already supports stack canaries, so I would start with #15180, and then try the various ssp attributes to see if they produce correct code with effective mitigations.

I think we should have a high-level compiler flag which enables a reasonable set of mitigations. Perhaps a --harden-level= flag, with -H as a short form analogous to -O?

@thestinger
Contributor

Rust does already support full ASLR / full RELRO via -C link-args="-pie -Wl,-z,relro,-z,now".

In order for ASLR to be useful, you need to prevent leaking a pointer to any symbol as it gives away the randomized base, and Rust permits this in safe code. To be truly useful as a statistical defence rather than security through obscurity, full RELRO is required, and at the moment that means forcing on immediate binding but it might not in the future.
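
For illustration, safe code can leak an image address as simply as printing a function pointer or the address of a static; a minimal, hypothetical sketch:

fn main() {
    // Printing a function pointer is entirely safe, yet it reveals where the
    // executable was mapped, which is exactly what ASLR tries to hide.
    let f: fn() = main;
    println!("main is at {:p}", f);

    // The address of any static item gives away the image base just as well.
    static X: u32 = 0;
    println!("a static lives at {:p}", &X);
}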

Chromium also does ASLR in userspace for memory allocations, because the OS ASLR is usually very weak. For example, Linux without PaX patches will still just lay out each mmap precisely after the last one as it only ever randomizes the starting point. PaX and Chromium instead add a small random gap, accepting some memory fragmentation for the sake of improved randomization. This doesn't play well with jemalloc, because it wants full control over how allocations are laid out.
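
As a rough sketch of that gap technique (hypothetical code, not Chromium's or PaX's actual implementation; it assumes the libc crate, a Unix target, and a stand-in entropy source where a real allocator would use a proper RNG):

const PAGE: usize = 4096;

// Reserve a randomly sized, inaccessible gap in front of each large mapping so
// that consecutive allocations are not laid out back to back.
fn alloc_with_random_gap(len: usize) -> *mut u8 {
    // Stand-in entropy source, to keep the sketch dependency-free.
    let gap_pages = std::time::SystemTime::now()
        .duration_since(std::time::UNIX_EPOCH)
        .unwrap()
        .subsec_nanos() as usize
        % 256;
    let total = gap_pages * PAGE + len;
    unsafe {
        // Map the whole region PROT_NONE first (error handling mostly omitted)...
        let base = libc::mmap(
            std::ptr::null_mut(),
            total,
            libc::PROT_NONE,
            libc::MAP_PRIVATE | libc::MAP_ANONYMOUS,
            -1,
            0,
        );
        assert!(base != libc::MAP_FAILED, "mmap failed");
        // ...then make only the part after the gap usable. The leading gap both
        // randomizes placement and acts as an inaccessible guard region.
        let payload = (base as *mut u8).add(gap_pages * PAGE);
        libc::mprotect(payload as *mut libc::c_void, len, libc::PROT_READ | libc::PROT_WRITE);
        payload
    }
}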

I'm not sure how much of this is sensible to pursue for Rust. The stack canaries cause a 1-10% performance hit and a 1-15% code size increase. They're aimed at stopping vulnerabilities due to C strings more than anything else and include a \0 to stop those functions from going past the end of the stack frame.

@kmcallister
Contributor Author

Rust does already support full ASLR / full RELRO via -C link-args="-pie -Wl,-z,relro,-z,now"

Cool. That's quite a mouthful compared to -H though. :) I want to make "more secure and somewhat slower" a preference that users can easily express, similar to how -O expresses "faster to run and slower to compile and harder to debug" without getting into the details of every optimization pass.

In order for ASLR to be useful, you need to prevent leaking a pointer to any symbol as it gives away the randomized base, and Rust permits this in safe code. To be truly useful as a statistical defense rather than security through obscurity, full RELRO is required

Sure, Rust allows creating such a leak in safe code, but it's not the most common thing to do, and the attacker still has to find a way to access it in the relevant attack scenario (which could be remote, or limited in the number of attempts). Even if they do find a leak, you've forced them to spend resources doing so, which is what these mitigations are all about.

I don't think the question of what things are possible in safe code is very relevant, anyway. For any code where these mitigations come into play, the memory safety system has already failed, or doesn't try to provide any guarantees (unsafe and foreign code). The safe dialect shouldn't and doesn't need to prevent you from doing anything that might decrease the effectiveness of exploit mitigations; they are fundamentally an unsound, defense-in-depth thing.

Of course it would be easy to get such a leak in a system which uses unsafe to sandbox attacker-controlled code. But in that case you're already trusting the typechecker to a huge degree, and it seems moot to complain that it can't also prevent loopholes in a last-ditch countermeasure.

There's no actual sharp line between "security through obscurity" and "statistical defense"; it's all about how much the attacker spends versus how much you spend. And ASLR is very cheap on AMD64 (see my benchmarks). It's probably cheap on ARM too, although I haven't checked. The question of which mitigations are enabled at which --harden-level will certainly be platform specific.

and at the moment that means forcing on immediate binding but it might not in the future.

Interesting; how do you do RELRO without immediate binding?

Also I don't understand why RELRO is a prerequisite for useful ASLR. In my understanding, ASLR is about making it hard to find out where things are. RELRO is about preventing writes to certain executable pages (inter alia) even knowing where those pages are. So if you have some RWX pages, but it's hard for the attacker to find them, that's still a win.

Chromium also does ASLR in userspace for memory allocations, because the OS ASLR is usually very weak. For example, Linux without PaX patches will still just lay out each mmap precisely after the last one as it only ever randomizes the starting point.

Dang, I didn't realize Linux's ASLR is that bad >_>. But static exec ASLR is still a big win. Programs that aren't doing dynamic code loading / generation will have all their executable pages randomized, which is important for preventing ROP.

Anyway with this ticket I mostly had compiler features in mind. Hardening allocator libraries seems pretty separate although I would certainly be happy to see that as well. There is plenty of allocator hardening you can do beyond randomization, as well.

The stack canaries cause a 1-10% performance hit and a 1-15% code size increase.

Ubuntu builds all packages with stack canaries by default. They do PIE for ASLR on certain high-security packages and would do it for everything on AMD64 if not for issues of compatibility with existing code. Debian also hardens many packages. Mosh built from source uses whatever hardening is supported by the platform.

My point is that this stuff is even today becoming the norm, and if we don't support it, that's a serious regression from C to unsafe-Rust, or even Rust that links C libraries (recall that a non-PIE Rust binary will provide ROP gadgets for an exploit in a perfectly hardened C library). Yes, there is sometimes a performance penalty, and users can decide how they feel about that, much as they decide whether to use -O.

They're aimed at stopping vulnerabilities due to C strings more than anything else and include a \0 to stop those functions from going past the end of the stack frame.

I don't think that's fair or accurate. Sure, an AMD64 Linux glibc canary contains one NULL byte; it might as well. But it also contains seven random bytes. Checking those bytes before return will catch attempts to overflow stack buffers that aren't strings. I'm confident I can find many examples of this mitigation being effective in practice.

Actually, I suspect that NULL byte is there, as the LSB (i.e. first in memory), to stop C string functions from reading the canary value and leaking it to the attacker, who could then include it in a stack-smash attempt that might have nothing to do with strings and can happen in a completely different function.

For anyone interested, here's code to print the stack canary on AMD64 Linux:

#![feature(asm)]
fn main() {
    let canary: u64;
    unsafe {
        // On x86-64 Linux, glibc stores the per-thread canary in TLS at %fs:0x28.
        asm!("movq %fs:0x28, $0" : "=r"(canary))
    };
    println!("{:016x}", canary);
}
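
(That snippet uses the 2014-era asm! syntax and feature gate; a rough present-day equivalent with std::arch::asm!, still assuming x86-64 Linux with glibc, is below.)

use std::arch::asm;

fn main() {
    let canary: u64;
    unsafe {
        // glibc keeps the per-thread canary in TLS at offset 0x28 from %fs.
        asm!("mov {}, fs:[0x28]", out(reg) canary);
    }
    // On glibc the least-significant byte (first in memory) is typically 0x00,
    // the string-terminator trick discussed above.
    println!("{:016x}", canary);
}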

@thestinger
Contributor

Also I don't understand why RELRO is a prerequisite for useful ASLR. In my understanding, ASLR is about making it hard to find out where things are. RELRO is about preventing writes to certain executable pages (inter alia) even knowing where those pages are. So if you have some RWX pages, but it's hard for the attacker to find them, that's still a win.

RELRO makes tables of function pointers (the GOT) read-only. In a binary not compiled as a position independent executable, these are in predictable locations. A position independent executable makes it harder to exploit this in many cases, but it's still a significant weakness, because writable function-pointer tables hand control of the program's execution to anyone who can overwrite them.

https://isisblogs.poly.edu/2011/06/01/relro-relocation-read-only/

@kmcallister
Contributor Author

harder to exploit this in many cases, but it's still a significant weakness

Yeah, that's basically the name of the game here. It seems really unfair to dismiss ASLR as "security through obscurity" just because it can be worked around sometimes.

@thestinger
Contributor

Yeah, that's basically the name of the game here. It seems really unfair to dismiss ASLR as "security through obscurity" just because it can be worked around sometimes.

I guess that's true, but it's a lot more valuable when combined with other mitigations like RELRO, and when the application / library code is written or audited with info leaks in mind. An example of a feature that interacts poorly with this is repr (a.k.a. Poly, the {:?} formatter), as it prints all of the raw pointer addresses in private fields and it's easy to use it without realizing it's going to do that. On the positive side, that's hidden away in a libdebug crate now.
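
(The reflective repr/Poly formatter is long gone, but the underlying point survives: with today's {:?}, a type opts in via Debug, and a derived Debug will still happily print any pointer it holds. A hypothetical sketch:)

#[derive(Debug)]
struct Connection {
    // A raw pointer field; the derived Debug prints its value verbatim.
    buffer: *const u8,
    len: usize,
}

fn main() {
    let data = [0u8; 16];
    let conn = Connection { buffer: data.as_ptr(), len: data.len() };
    // Logging this leaks a stack address to anyone who can read the output.
    println!("{:?}", conn);
}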

@kmcallister
Contributor Author

I hadn't thought about the fact that {:?} format strings expose that information. It would make sense if -H also enables certain lints, and that could include a warning about {:?}. We probably want such a lint anyway.

We could introduce something kind of like the stability attributes but meaning "this item shouldn't be used in production code" even when the interface is stable. You'd enable this lint on production builds, and you could still use {:?} within code gated by #[cfg(debug)].
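
(There is no built-in #[cfg(debug)], but the idea can be sketched with the debug_assertions cfg, or a custom cfg of your own; hypothetical example:)

#[derive(Debug)]
struct Session {
    token_ptr: *const u8,
}

// Only the debug build contains the {:?} call, so a "no Debug output in
// production" lint would have nothing to flag in release builds.
#[cfg(debug_assertions)]
fn dump_session(s: &Session) {
    println!("session state: {:?}", s);
}

#[cfg(not(debug_assertions))]
fn dump_session(_s: &Session) {
    println!("session state: <redacted>");
}

fn main() {
    let data = [0u8; 4];
    let s = Session { token_ptr: data.as_ptr() };
    dump_session(&s);
}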

@kmcallister
Contributor Author

This is rust-lang/rfcs#145.

@thestinger
Contributor

AFAIK PIE is the only reason for using LLVM's pic relocation model for an executable, so it can simply be enabled if the relocation model is pic. That's already the default model, so it will be enabled by default everywhere. It may make sense to use dynamic-no-pic by default on architectures like i686 where position independent code is expensive, but that's a separate issue. See #16340.

@thestinger
Contributor

#16514 covers providing full ASLR on Windows, as is already the case on Linux

@thestinger
Contributor

#16533 covers enabling DEP (NX bit) support for all Windows executables

@thestinger thestinger added the metabug Issues about issues themselves ("bugs about bugs") label Sep 16, 2014
@thestinger
Contributor

#17161 disabled ASLR on Windows...

@steveklabnik
Member

Triage: still a hodge-podge of some things, but there's more that can be done.

@steveklabnik
Member

Triage: same as 2015.

@Mark-Simulacrum Mark-Simulacrum added the C-tracking-issue Category: A tracking issue for an RFC or an unstable feature. label Jul 21, 2017
@steveklabnik
Member

Triage: clearly this tracking issue isn't helpful; we have some interest in a working group that would work on this stuff, and they'll track this in their own ways. Closing.

@phra

phra commented May 20, 2019

is this tracked somewhere now?

workingjubilee pushed a commit to workingjubilee/rustc that referenced this issue Sep 12, 2021
LLVM has built-in heuristics for adding stack canaries to functions. These
heuristics can be selected with LLVM function attributes. This patch adds a
rustc option `-Z stack-protector={none,basic,strong,all}` which controls the use
of these attributes. This gives rustc the same stack smash protection support as
clang offers through options `-fno-stack-protector`, `-fstack-protector`,
`-fstack-protector-strong`, and `-fstack-protector-all`. The protection this can
offer is demonstrated in test/ui/abi/stack-protector.rs. This fills a gap in the
current list of rustc exploit
mitigations (https://doc.rust-lang.org/rustc/exploit-mitigations.html),
originally discussed in rust-lang#15179.

Stack smash protection adds runtime overhead and is therefore still off by
default, but now users have the option to trade performance for security as they
see fit. An example use case is adding Rust code in an existing C/C++ code base
compiled with stack smash protection. Without the ability to add stack smash
protection to the Rust code, the code base artifacts could be exploitable in
ways not possible if the code base remained pure C/C++.

Stack smash protection support is present in LLVM for almost all the current
tier 1/tier 2 targets: see
test/assembly/stack-protector/stack-protector-target-support.rs. The one
exception is nvptx64-nvidia-cuda. This patch follows clang's example, and adds a
warning message printed if stack smash protection is used with this target (see
test/ui/stack-protector/warn-stack-protector-unsupported.rs). Support for tier 3
targets has not been checked.

Since the heuristics are applied at the LLVM level, the heuristics are expected
to add stack smash protection to a fraction of functions comparable to C/C++.
Some experiments demonstrating how Rust code is affected by the different
heuristics can be found in
test/assembly/stack-protector/stack-protector-heuristics-effect.rs. There is
potential for better heuristics using Rust-specific safety information. For
example it might be reasonable to skip stack smash protection in functions which
transitively only use safe Rust code, or which uses only a subset of functions
the user declares safe (such as anything under `std.*`). Such alternative
heuristics could be added at a later point.

LLVM also offers a "safestack" sanitizer as an alternative way to guard against
stack smashing (see rust-lang#26612). This could possibly also be included as a
stack-protection heuristic. An alternative is to add it as a sanitizer (rust-lang#39699).
This is what clang does: safestack is exposed with option
`-fsanitize=safe-stack`.

The options are only supported by the LLVM backend, but as with other codegen
options it is visible in the main codegen option help menu. The heuristic names
"basic", "strong", and "all" are hopefully sufficiently generic to be usable in
other backends as well.

Reviewed-by: Nikita Popov <[email protected]>

Extra commits during review:

- [address-review] make the stack-protector option unstable

- [address-review] reduce detail level of stack-protector option help text

- [address-review] correct grammar in comment

- [address-review] use compiler flag to avoid merging functions in test

- [address-review] specify min LLVM version in fortanix stack-protector test

  Only for Fortanix test, since this target specifically requests the
  `--x86-experimental-lvi-inline-asm-hardening` flag.

- [address-review] specify required LLVM components in stack-protector tests

- move stack protector option enum closer to other similar option enums

- rustc_interface/tests: sort debug option list in tracking hash test

- add an explicit `none` stack-protector option

Revert "set LLVM requirements for all stack protector support test revisions"

This reverts commit a49b74f92a4e7d701d6f6cf63d207a8aff2e0f68.
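
For context, here is a hypothetical example of the kind of function the "strong" heuristic described above typically instruments (a local stack buffer whose contents come from the caller); building it on nightly with -Z stack-protector=strong should show a canary check in the generated assembly, though the exact decision is up to LLVM:

#[inline(never)]
fn copy_prefix(src: &[u8]) -> u8 {
    let mut buf = [0u8; 64]; // local array: the "strong" heuristic targets these
    let n = src.len().min(buf.len());
    buf[..n].copy_from_slice(&src[..n]);
    buf[0]
}

fn main() {
    println!("{}", copy_prefix(b"hello"));
}
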
workingjubilee added a commit to workingjubilee/rustc that referenced this issue Sep 12, 2021
add codegen option for using LLVM stack smash protection

LLVM has built-in heuristics for adding stack canaries to functions. These
heuristics can be selected with LLVM function attributes. This PR adds a codegen
option `-C stack-protector={basic,strong,all}` which controls the use of these
attributes. This gives rustc the same stack smash protection support as clang
offers through options `-fstack-protector`, `-fstack-protector-strong`, and
`-fstack-protector-all`. The protection this can offer is demonstrated in
test/ui/abi/stack-protector.rs. This fills a gap in the current list of rustc
exploit mitigations (https://doc.rust-lang.org/rustc/exploit-mitigations.html),
originally discussed in rust-lang#15179.

Stack smash protection adds runtime overhead and is therefore still off by
default, but now users have the option to trade performance for security as they
see fit. An example use case is adding Rust code in an existing C/C++ code base
compiled with stack smash protection. Without the ability to add stack smash
protection to the Rust code, the code base artifacts could be exploitable in
ways not possible if the code base remained pure C/C++.

Stack smash protection support is present in LLVM for almost all the current
tier 1/tier 2 targets: see
test/assembly/stack-protector/stack-protector-target-support.rs. The one
exception is nvptx64-nvidia-cuda. This PR follows clang's example, and adds a
warning message printed if stack smash protection is used with this target (see
test/ui/stack-protector/warn-stack-protector-unsupported.rs). Support for tier 3
targets has not been checked.

Since the heuristics are applied at the LLVM level, the heuristics are expected
to add stack smash protection to a fraction of functions comparable to C/C++.
Some experiments demonstrating how Rust code is affected by the different
heuristics can be found in
test/assembly/stack-protector/stack-protector-heuristics-effect.rs. There is
potential for better heuristics using Rust-specific safety information. For
example it might be reasonable to skip stack smash protection in functions which
transitively only use safe Rust code, or which uses only a subset of functions
the user declares safe (such as anything under `std.*`). Such alternative
heuristics could be added at a later point.

LLVM also offers a "safestack" sanitizer as an alternative way to guard against
stack smashing (see rust-lang#26612). This could possibly also be included as a
stack-protection heuristic. An alternative is to add it as a sanitizer (rust-lang#39699).
This is what clang does: safestack is exposed with option
`-fsanitize=safe-stack`.

The options are only supported by the LLVM backend, but as with other codegen
options it is visible in the main codegen option help menu. The heuristic names
"basic", "strong", and "all" are hopefully sufficiently generic to be usable in
other backends as well.
Manishearth added a commit to Manishearth/rust that referenced this issue Sep 12, 2021
add codegen option for using LLVM stack smash protection

bbjornse added a commit to bbjornse/rust that referenced this issue Nov 22, 2021
bors added a commit to rust-lang-ci/rust that referenced this issue Nov 23, 2021
add codegen option for using LLVM stack smash protection

bors added a commit to rust-lang-ci/rust that referenced this issue Aug 21, 2023
…rams-are-ignored, r=HKalbasi

the "add missing members" assists: implemented substitution of default values of const params

To achieve this, I've made `hir::ConstParamData` store the default values