
GODRIVER-2898 Standardized performance testing #1829

Merged

prestonvasquez merged 11 commits into mongodb:master from GODRIVER-2898 on Oct 8, 2024

Conversation

@prestonvasquez (Collaborator) commented Sep 25, 2024

GODRIVER-2898

Summary

Migrate the existing benchmark pattern from the harness model to the benchmark facility provided in the Go testing package. There are multiple benefits to this:

  • Iteration control: b.N will achieve accurate and consistent benchmark results without needing the manual iteration counts defined in the benchmarking specification. This will also typically require fewer iterations than those prescribed in the specifications while maintaining reliable profiling (see the example benchmark after this list). While the driver mantras instruct us to favor consistency, there doesn’t appear to be a good argument for not relying on Go idioms here.
  • Built-in measurement and reporting: leverage the metrics already collected by the testing package: total time, iterations, allocated bytes per operation, allocations per operation, total memory allocations, total bytes allocated, etc.
  • Consistency: The harness pattern defies expectation for benchmarking. Developers of the Go Driver would naturally expect our performance testing to use the testing package.
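
For illustration, an individual benchmark under this model is an ordinary Go benchmark; runSingleOp below is a hypothetical stand-in for the driver operation being measured, not code from this PR:

func BenchmarkSingleRunCommand(b *testing.B) {
	b.ReportAllocs() // report B/op and allocs/op alongside ns/op
	b.ResetTimer()   // exclude any setup above from the measurement

	// The testing package grows b.N until timing stabilizes, replacing
	// the fixed iteration counts prescribed by the spec.
	for i := 0; i < b.N; i++ {
		runSingleOp()
	}
}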

Measuring performance requires us to run each benchmark, collect the results, and store them as test objects, which happen to be defined by the poplar package. To run all benchmarking tests, collect the results, and write a perf.json file, we use the testing.Benchmark function:

func TestRunAllBenchmarks(t *testing.T) {
	cases := []struct {
		name      string
		benchmark func(*testing.B)
	}{
		// [...]
	}

	results := []poplar.Test{}
	for _, bench := range cases {
		result := testing.Benchmark(bench.benchmark)

		// Convert the testing.BenchmarkResult into a poplar.Test;
		// newPoplarTest is an illustrative name, not the PR's exact code.
		test, err := newPoplarTest(bench.name, result)
		if err != nil && *failOnErr { // failOnErr flag: discussed below
			t.Fatal(err)
		}

		results = append(results, test)
	}
}

Unfortunately, since there is no way for us to pass a *testing.T value to the benchmark, assertion failures inside the benchmark will not surface when running this test. To catch them, we fail the test if no iterations are run:

if result.N == 0 {
	return test, fmt.Errorf("benchmark failed to run")
}
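
In context, this check plausibly lives in the conversion step referenced above. A minimal sketch, assuming an illustrative newPoplarTest helper (the PR's real conversion code may differ):

// newPoplarTest wraps a completed benchmark result as a poplar.Test. The
// name and the elided field population are illustrative assumptions.
func newPoplarTest(name string, result testing.BenchmarkResult) (poplar.Test, error) {
	var test poplar.Test // populated from result: iterations, ns/op, allocs, etc. (elided)

	// A benchmark that panicked or tripped an assertion completes zero
	// iterations, which is the only failure signal available here.
	if result.N == 0 {
		return test, fmt.Errorf("benchmark %q failed to run", name)
	}

	return test, nil
}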

Additionally, to debug locally, developers should run the benchmark directly. To keep the test from failing during local development when there is an issue, we only fail when the failOnErr flag is used:

go test -run "TestRunAllBenchmarks" --failOnErr
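
Assuming the flag is defined with the standard flag package in the test file (a sketch; the PR's actual wiring may differ):

var failOnErr = flag.Bool("failOnErr", false,
	"fail TestRunAllBenchmarks when a benchmark reports an error")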

The ultimate goal of performance testing is to report significant “change points” to the team. This is accomplished by calculating the E-coefficient of inhomogeneity (H score) between the perf.json results sent for different commits. The coefficient is calculated using the equation in the linked wiki article. It is notably a distance function: the larger the H score, the more inhomogeneous the data, and an H score of 0 means the distributions are the same. The value always lies in [0, 1]. The server defaults to an H score threshold of 0.6 per PERF-3904, as this value was determined to be unlikely to yield false positives while capturing the maximum number of inhomogeneous cases.
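
For reference, the E-coefficient of inhomogeneity for two samples X and Y is conventionally defined as follows (a sketch of the standard energy-statistics form; the linked wiki article remains the authoritative source):

	H = \frac{2E|X - Y| - E|X - X'| - E|Y - Y'|}{2E|X - Y|}

where X' and Y' are independent copies of X and Y. The numerator is the energy distance between the two distributions, which is zero exactly when they coincide, and the denominator normalizes H into [0, 1].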

Background & Motivation

Drivers should ensure that any performance testing, including but not limited to the driver spec performance benchmarks:

  • uses a dedicated distro in evergreen (to avoid fluctuations in performance due to distro changes)
  • uses a patch-pinned server version for integration performance tests (to avoid fluctuations in performance due to server performance profile changes)
  • utilizes the performance analytics backend for change point detection via the perf.send command (the tooling uses sophisticated algorithms to detect real points of change in performance and minimize noise)
  • sets up actionable alerting based on performance results

@mongodb-drivers-pr-bot added the dependencies (Pull requests that update a dependency file) and priority-3-low (Low Priority PR for Review) labels on Sep 25, 2024

API Change Report

No changes found!

@prestonvasquez prestonvasquez changed the title GODRIVER-2898 Standardized performance logging GODRIVER-2898 Standardized performance testing Sep 26, 2024
@prestonvasquez prestonvasquez marked this pull request as ready for review September 27, 2024 20:21
blink1073 (Member) previously approved these changes Sep 28, 2024

@blink1073 left a comment:

LGTM!

matthewdale (Collaborator) previously approved these changes Oct 7, 2024

@matthewdale left a comment:

Looks good! 👍

@prestonvasquez prestonvasquez merged commit 6057c7d into mongodb:master Oct 8, 2024
30 of 33 checks passed
@prestonvasquez prestonvasquez deleted the GODRIVER-2898 branch October 8, 2024 15:07