Build script fingerprint of registry dependencies is not tracked in fingerprint. #6733

Open
ehuss opened this issue Mar 10, 2019 · 2 comments
Labels
A-build-scripts Area: build.rs scripts A-rebuild-detection Area: rebuild detection and fingerprinting C-bug Category: bug S-triage Status: This issue is waiting on initial triage.

Comments

ehuss commented Mar 10, 2019

The fingerprint of a build script is not included in a dependency's fingerprint. #6720 added this tracking, but because it relies on sub-second mtime comparisons to work correctly, #6734 removed it for registry dependencies. The reason is that dependencies are often cached in Docker images, and Docker zeros out the nanoseconds portion of the mtime when the image is saved.
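
For illustration, a quick way to observe the truncation (the path is hypothetical; point it at any file that went through a docker save/docker load round trip):

use std::fs;
use std::time::UNIX_EPOCH;

fn main() -> std::io::Result<()> {
    // Hypothetical path: any file restored from a Docker image layer.
    let mtime = fs::metadata("target/debug/some_cached_file")?.modified()?;
    let nanos = mtime.duration_since(UNIX_EPOCH).unwrap().subsec_nanos();
    // After the round trip this prints 0, so any comparison that relies on
    // sub-second precision can no longer tell which file is "newer".
    println!("mtime subsecond nanos: {}", nanos);
    Ok(())
}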

This only affects the scenario where two separate commands are run, such as cargo build followed by cargo test: the first command picks up the change, but the second does not.

There are a few ways I can think of that this could occur: changing an environment variable tracked by rerun-if-env-changed, changing a system library file tracked by rerun-if-changed, or adding a build override after already building. A minimal build script exercising the first two directives is sketched below.
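
A minimal build.rs for such a dependency (the env var name and header path are just placeholders):

// build.rs
fn main() {
    // Rebuild when this environment variable changes between invocations.
    println!("cargo:rerun-if-env-changed=MY_TEST_VAR");
    // Rebuild when this (hypothetical) system header changes on disk.
    println!("cargo:rerun-if-changed=/usr/include/zlib.h");
}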

I cannot think of an easy way to fix this. I'd rather not try to change the mtime resolution used in the fingerprint. I think the ideal solution would be: instead of tracking the invoked.timestamp file, include the same fingerprint information calculated by build_script_local_fingerprints when computing a dependent's fingerprint, and use a hybrid mtime/content hash for rerun-if-changed files.
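
To illustrate the hybrid idea, a minimal sketch (not Cargo's implementation; the hasher and function signature are placeholders):

use std::collections::hash_map::DefaultHasher;
use std::fs;
use std::hash::{Hash, Hasher};
use std::io;
use std::path::Path;
use std::time::SystemTime;

/// Treat a rerun-if-changed file as dirty if its mtime is newer than the last
/// build-script run; when the mtime is inconclusive (equal, or truncated to
/// whole seconds by e.g. Docker), fall back to comparing a content hash.
fn is_dirty(path: &Path, last_run: SystemTime, recorded_hash: u64) -> io::Result<bool> {
    let mtime = fs::metadata(path)?.modified()?;
    if mtime > last_run {
        return Ok(true);
    }
    let mut hasher = DefaultHasher::new();
    fs::read(path)?.hash(&mut hasher);
    Ok(hasher.finish() != recorded_hash)
}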

Below are some tests that demonstrate this behavior. Both of these will fail on the last step if #6734 is merged.

#[test]
fn rerun_build_script_dep() {
    Package::new("bar", "0.1.0")
        .file(
            "build.rs",
            r#"
            fn main() {
                println!("cargo:rerun-if-env-changed=MY_TEST_VAR");
            }
        "#,
        )
        .file("src/lib.rs", "")
        .publish();

    let p = project()
        .file(
            "Cargo.toml",
            r#"
            [package]
            name = "foo"
            version = "0.1.0"
            [dependencies]
            bar = "0.1"
        "#,
        )
        .file("src/lib.rs", "")
        .build();

    p.cargo("build").env("MY_TEST_VAR", "1").run();
    p.cargo("test --lib").env("MY_TEST_VAR", "1").run();
    p.cargo("build -vv")
        .env("MY_TEST_VAR", "1")
        .with_stderr_contains("[FRESH] bar v0.1.0")
        .with_stderr_contains("[FRESH] foo v0.1.0 [..]")
        .run();
    p.cargo("build -vv")
        .env("MY_TEST_VAR", "2")
        .with_stderr_contains("[COMPILING] bar v0.1.0")
        .with_stderr_contains("[COMPILING] foo v0.1.0 [..]")
        .run();
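    // The last step is the one that fails without the tracking: foo's test
    // target should be recompiled because bar's build script re-ran during the
    // preceding `cargo build`, but with only mtimes to go on it stays FRESH.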
    p.cargo("test -vv --lib")
        .env("MY_TEST_VAR", "2")
        .with_stderr_contains("[FRESH] bar v0.1.0")
        .with_stderr_contains("[COMPILING] foo v0.1.0 [..]")
        .run();
}

#[test]
fn changing_dep_override_invalidates() {
    let target = rustc_host();

    Package::new("dep1", "0.1.0")
        .file(
            "Cargo.toml",
            r#"
                [project]
                name = "dep1"
                version = "0.1.0"
                links = "asdf"
            "#,
        )
        .file("build.rs", "fn main() {}")
        .file("src/lib.rs", "pub fn f() -> bool { cfg!(trigger) }")
        .publish();

    let p = project()
        .file(
            "Cargo.toml",
            r#"
                [project]
                name = "foo"
                version = "0.5.0"
                [dependencies]
                dep1 = "0.1"
            "#,
        )
        .file(
            "src/lib.rs",
            r#"
                extern crate dep1;
                #[test]
                fn t1() {
                    assert_eq!(dep1::f(), true, "trigger not set");
                }
            "#,
        )
        .build();

    p.cargo("build").run();
    p.cargo("test --lib -vv")
        .with_stderr_contains("[FRESH] dep1 v0.1.0")
        .with_stdout_contains("[..]trigger not set[..]")
        .with_status(101)
        .run();

    p.change_file(
        ".cargo/config",
        &format!(
            r#"
                [target.{}.asdf]
                rustc-cfg = ["trigger"]
            "#,
            target
        ),
    );

    p.cargo("build -vv")
        .with_stderr_contains("[COMPILING] dep1 v0.1.0")
        .with_stderr_contains("[COMPILING] foo [..]")
        .run();
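    // The last step is the one that fails without the tracking: foo's test
    // target should pick up the effect of the new build override on dep1, but
    // with only mtimes to go on it stays FRESH.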
    p.cargo("test --lib -vv")
        .with_stderr_contains("[FRESH] dep1")
        .with_stderr_contains("[COMPILING] foo [..]")
        .run();
}
@ehuss ehuss added C-bug Category: bug A-rebuild-detection Area: rebuild detection and fingerprinting A-build-scripts Area: build.rs scripts labels Mar 10, 2019
shepmaster commented Mar 11, 2019

> dependencies are often cached in Docker images

Is it actually a frequent occurrence? I know the playground does it and is a big public face of this, but are there other folk?

> Docker zeros out the nanoseconds portion of the mtime when the image is saved.

Back in the long ago, we actually scraped through Cargo's files, resetting the nanoseconds (roughly like the sketch below); I don't suppose any such hackery would still make sense, or something similar in a more supported form? For example, CARGO_NANOSECOND_GRANULARITY=false.
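
A rough sketch of that kind of hackery, assuming the walkdir and filetime crates and an illustrative cache path (not the exact script that was used):

use filetime::{set_file_mtime, FileTime};
use walkdir::WalkDir;

fn main() {
    // Illustrative location: walk whatever cached cargo directories end up in the image.
    for entry in WalkDir::new("/root/.cargo/registry")
        .into_iter()
        .filter_map(Result::ok)
        .filter(|e| e.file_type().is_file())
    {
        let meta = match entry.metadata() {
            Ok(m) => m,
            Err(_) => continue,
        };
        let mtime = FileTime::from_last_modification_time(&meta);
        if mtime.nanoseconds() != 0 {
            // Truncate to whole seconds up front, so the mtimes look the same
            // before and after the image is saved and reloaded.
            let truncated = FileTime::from_unix_time(mtime.unix_seconds(), 0);
            let _ = set_file_mtime(entry.path(), truncated);
        }
    }
}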

ehuss commented Mar 11, 2019

"Often" is an overstatement, but I do see people here and on forums who at least try to cache deps in docker images.

There have been a number of problems with mtimes in general. I'd like to see some kind of alternate change-tracking mode that makes it more reliable, but it's not clear exactly what that would look like.

@epage epage added the S-triage Status: This issue is waiting on initial triage. label Oct 31, 2023