99174: sql: fix circular dependencies in views r=rharding6373 a=rharding6373

This change fixes node crashes that could happen due to stack overflow if views were created with circular dependencies.

Fixes: #98999
Epic: none
Co-authored-by: [email protected]

Release note (bug fix): If views are created with circular dependencies, CRDB returns the error "cyclic view dependency for relation" instead of crashing the node. This bug has been present since at least 21.1.
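
For illustration, a minimal repro sketch using Go's `database/sql` against a local single-node cluster; the driver, connection string, and exact statement sequence are assumptions, not part of this change:

```go
package main

import (
	"database/sql"
	"log"

	_ "github.com/lib/pq"
)

func main() {
	db, err := sql.Open("postgres", "postgresql://root@localhost:26257/defaultdb?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Build a dependency cycle v1 -> v2 -> v1. Before this fix, resolving such
	// views could overflow the stack and crash the node; with the fix, the
	// offending statement should fail with "cyclic view dependency for relation".
	for _, stmt := range []string{
		`CREATE TABLE t (x INT)`,
		`CREATE VIEW v1 AS SELECT x FROM t`,
		`CREATE VIEW v2 AS SELECT x FROM v1`,
		`CREATE OR REPLACE VIEW v1 AS SELECT x FROM v2`, // closes the cycle
	} {
		if _, err := db.Exec(stmt); err != nil {
			log.Printf("%s: %v", stmt, err)
		}
	}
}
```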

99624: testutils: add infrastructure for reusable test fixtures r=RaduBerinde a=RaduBerinde

Certain storage-related tests/benchmarks generate fixtures and attempt to reuse them across invocations. This is important because fixtures can be large and slow to generate; more importantly, the generation is non-deterministic, and we want to use the exact same fixture when comparing benchmark data.

Currently the tests achieve this by using a `.gitignore`d subdirectory inside the source tree. This does not work with bazel (which executes the test in a sandbox).

This commit addresses the issue by adding new test infrastructure for reusable fixtures. We use the user's `.cache` directory instead of the source tree (specifically $HOME/.cache/crdb-test-fixtures/...). For bazel, we make sure the path is available (and writable) in the sandbox and we pass the path to the test through an env var.
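
As a rough usage sketch (the helper name and signature are inferred from the bench_test.go changes below; everything else here is illustrative): the generate callback runs only when the named fixture is missing from the cache directory, otherwise the cached copy is reused.

```go
package example

import (
	"os"
	"path/filepath"
	"testing"

	"github.com/cockroachdb/cockroach/pkg/testutils/testfixtures"
)

// BenchmarkWithFixture is an illustrative benchmark showing the reuse pattern.
func BenchmarkWithFixture(b *testing.B) {
	dir := testfixtures.ReuseOrGenerate(b, "example_fixture", func(dir string) {
		// Expensive and/or non-deterministic generation goes here; it only runs
		// when the fixture is absent from $HOME/.cache/crdb-test-fixtures (or the
		// path in COCKROACH_TEST_FIXTURES_DIR under bazel).
		if err := os.WriteFile(filepath.Join(dir, "data.bin"), []byte("fixture contents"), 0644); err != nil {
			b.Fatal(err)
		}
	})
	_ = dir // open and read the fixture from dir on every benchmark run
}
```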

Fixes #83599.

Release note: None
Epic: none

99699: spanconfig: integrate SpanConfigBounds with the Store and KVSubscriber r=ajwerner a=arulajmani

This patch integrates `SpanConfigBounds` with the `KVSubscriber` and
`spanconfig.Store`. The `spanconfig.Store` used by the `KVSubscriber`
now has a handle to the global tenant capability state. It uses this to
clamp any secondary tenant span configs that are not in conformance
before returning them.

By default, clamping of secondary tenant span configurations is turned
off. It can be enabled using the `spanconfig.bounds.enabled` cluster
setting. The setting is hidden.
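
Conceptually, clamping just projects an out-of-bounds value back into the allowed interval; a minimal hypothetical sketch (these are not the actual spanconfig types or field names):

```go
package example

// rangeSizeBounds is a hypothetical bound on a single span config field.
type rangeSizeBounds struct {
	minRangeMaxBytes int64
	maxRangeMaxBytes int64
}

// clamp returns v limited to [b.minRangeMaxBytes, b.maxRangeMaxBytes].
func clamp(v int64, b rangeSizeBounds) int64 {
	if v < b.minRangeMaxBytes {
		return b.minRangeMaxBytes
	}
	if v > b.maxRangeMaxBytes {
		return b.maxRangeMaxBytes
	}
	return v
}
```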

Fixes: #99689
Informs #99911

Release note: None

99759: roachtest: set lower min range_max_bytes in multitenant_distsql test r=rharding6373 a=rharding6373

The multitenant_distsql roachtest relies on a smaller range size than the new default minimum for range_max_bytes (64MiB) due to performance and resource limitations. We set the COCKROACH_MIN_RANGE_MAX_BYTES environment variable to allow the test to use the smaller range size.

Fixes: #99626
Epic: None

Release note: None

99813: spanconfig: do not fatal in NeedsSplit and ComputeSplitKey r=irfansharif a=arulajmani

See individual commits for details. 

Fixes #97336.

99839: schemachanger: Annotate all tables if ALTER TABLE IF EXISTS on non-existent table r=Xiang-Gu a=Xiang-Gu

Previously, if table `t` did not exist, `ALTER TABLE IF EXISTS t` would only mark `t` as non-existent. This was inadequate because, for a statement like `ALTER TABLE IF EXISTS t ADD FOREIGN KEY REFERENCES t_other`, we would not touch `t_other`, and the validation logic would later complain that `t_other` was neither fully resolved nor marked as non-existent. This commit fixes that by marking all tables in the ALTER TABLE statement as non-existent when `t` is non-existent, so validation passes.

Fixes issues discovered in #99185
Epic: None

99865: roachtest: fix query used to get job status in backup/mixed-version r=srosenberg a=renatolabs

At the moment, we only query job status while the cluster is in a mixed-version state (so we should always use `system.jobs`). However, the code added in this commit should continue to work once we start developing 23.2, as we check that the cluster version is at least 23.1 before using `crdb_internal.system_jobs`.

Epic: none

Release note: None

99878: jobs: change job_info.info_key to string r=dt a=dt

Release note: none.
Epic: none.

Hopefully we get this one in now, before it is released and becomes harder to change later. I think if we go with bytes, we'll spend the next several years typing convert_to over and over, or forgetting to and then typing it, when debugging.

Co-authored-by: rharding6373 <[email protected]>
Co-authored-by: Radu Berinde <[email protected]>
Co-authored-by: Arul Ajmani <[email protected]>
Co-authored-by: Xiang Gu <[email protected]>
Co-authored-by: Renato Costa <[email protected]>
Co-authored-by: David Taylor <[email protected]>
7 people committed Mar 30, 2023
9 parents bba0bde + ee16008 + 9759972 + 554280c + 045e393 + 00ec8d3 + a83c0e1 + 8a0ce96 + 034db4b commit 99b655d
Showing 77 changed files with 1,231 additions and 474 deletions.
2 changes: 1 addition & 1 deletion dev
@@ -8,7 +8,7 @@ fi
set -euo pipefail

# Bump this counter to force rebuilding `dev` on all machines.
DEV_VERSION=70
DEV_VERSION=71

THIS_DIR=$(cd "$(dirname "$0")" && pwd)
BINARY_DIR=$THIS_DIR/bin/dev-versions
2 changes: 2 additions & 0 deletions pkg/BUILD.bazel
@@ -2061,6 +2061,7 @@ GO_TARGETS = [
"//pkg/testutils/storageutils:storageutils",
"//pkg/testutils/testcluster:testcluster",
"//pkg/testutils/testcluster:testcluster_test",
"//pkg/testutils/testfixtures:testfixtures",
"//pkg/testutils/zerofields:zerofields",
"//pkg/testutils/zerofields:zerofields_test",
"//pkg/testutils:testutils",
@@ -3202,6 +3203,7 @@ GET_X_DATA_TARGETS = [
"//pkg/testutils/sqlutils:get_x_data",
"//pkg/testutils/storageutils:get_x_data",
"//pkg/testutils/testcluster:get_x_data",
"//pkg/testutils/testfixtures:get_x_data",
"//pkg/testutils/zerofields:get_x_data",
"//pkg/ts:get_x_data",
"//pkg/ts/catalog:get_x_data",
1 change: 0 additions & 1 deletion pkg/ccl/storageccl/engineccl/.gitignore
@@ -3,5 +3,4 @@

# Old benchmark data.
mvcc_data
# New benchmark data.
mvcc_data_*
2 changes: 1 addition & 1 deletion pkg/ccl/storageccl/engineccl/BUILD.bazel
@@ -53,6 +53,7 @@ go_test(
"//pkg/testutils",
"//pkg/testutils/datapathutils",
"//pkg/testutils/storageutils",
"//pkg/testutils/testfixtures",
"//pkg/util/encoding",
"//pkg/util/hlc",
"//pkg/util/leaktest",
@@ -61,7 +62,6 @@ go_test(
"//pkg/util/randutil",
"//pkg/util/timeutil",
"@com_github_cockroachdb_datadriven//:datadriven",
"@com_github_cockroachdb_errors//oserror",
"@com_github_cockroachdb_pebble//:pebble",
"@com_github_cockroachdb_pebble//vfs",
"@com_github_cockroachdb_pebble//vfs/atomicfs",
137 changes: 65 additions & 72 deletions pkg/ccl/storageccl/engineccl/bench_test.go
@@ -12,7 +12,6 @@ import (
"context"
"fmt"
"math/rand"
"os"
"path/filepath"
"testing"

@@ -22,11 +21,11 @@ import (
"github.com/cockroachdb/cockroach/pkg/settings/cluster"
"github.com/cockroachdb/cockroach/pkg/storage"
"github.com/cockroachdb/cockroach/pkg/testutils"
"github.com/cockroachdb/cockroach/pkg/testutils/testfixtures"
"github.com/cockroachdb/cockroach/pkg/util/encoding"
"github.com/cockroachdb/cockroach/pkg/util/hlc"
"github.com/cockroachdb/cockroach/pkg/util/log"
"github.com/cockroachdb/cockroach/pkg/util/randutil"
"github.com/cockroachdb/errors/oserror"
)

// loadTestData writes numKeys keys in numBatches separate batches. Keys are
@@ -44,85 +43,82 @@ import (
// The creation of the database is time consuming, so the caller can choose
// whether to use a temporary or permanent location.
func loadTestData(
dirPrefix string, numKeys, numBatches, batchTimeSpan, valueBytes int,
) (storage.Engine, error) {
tb testing.TB, dirPrefix string, numKeys, numBatches, batchTimeSpan, valueBytes int,
) storage.Engine {
ctx := context.Background()

verStr := fmt.Sprintf("v%s", clusterversion.TestingBinaryVersion.String())
dir := fmt.Sprintf("%s_v%s_%d_%d_%d_%d", dirPrefix, verStr, numKeys, numBatches, batchTimeSpan, valueBytes)

exists := true
if _, err := os.Stat(dir); oserror.IsNotExist(err) {
exists = false
}

eng, err := storage.Open(
ctx,
storage.Filesystem(dir),
cluster.MakeTestingClusterSettings())
if err != nil {
return nil, err
}

absPath, err := filepath.Abs(dir)
if err != nil {
absPath = dir
}

if exists {
log.Infof(context.Background(), "using existing test data: %s", absPath)
testutils.ReadAllFiles(filepath.Join(dir, "*"))
return eng, nil
}
name := fmt.Sprintf("%s_v%s_%d_%d_%d_%d", dirPrefix, verStr, numKeys, numBatches, batchTimeSpan, valueBytes)
dir := testfixtures.ReuseOrGenerate(tb, name, func(dir string) {
eng, err := storage.Open(
ctx,
storage.Filesystem(dir),
cluster.MakeTestingClusterSettings())
if err != nil {
tb.Fatal(err)
}

log.Infof(context.Background(), "creating test data: %s", absPath)
log.Infof(context.Background(), "creating test data: %s", dir)

// Generate the same data every time.
rng := rand.New(rand.NewSource(1449168817))
// Generate the same data every time.
rng := rand.New(rand.NewSource(1449168817))

keys := make([]roachpb.Key, numKeys)
for i := 0; i < numKeys; i++ {
keys[i] = roachpb.Key(encoding.EncodeUvarintAscending([]byte("key-"), uint64(i)))
}
keys := make([]roachpb.Key, numKeys)
for i := 0; i < numKeys; i++ {
keys[i] = roachpb.Key(encoding.EncodeUvarintAscending([]byte("key-"), uint64(i)))
}

sstTimestamps := make([]int64, numBatches)
for i := 0; i < len(sstTimestamps); i++ {
sstTimestamps[i] = int64((i + 1) * batchTimeSpan)
}
sstTimestamps := make([]int64, numBatches)
for i := 0; i < len(sstTimestamps); i++ {
sstTimestamps[i] = int64((i + 1) * batchTimeSpan)
}

var batch storage.Batch
var minWallTime int64
for i, key := range keys {
if scaled := len(keys) / numBatches; (i % scaled) == 0 {
if i > 0 {
log.Infof(ctx, "committing (%d/~%d)", i/scaled, numBatches)
if err := batch.Commit(false /* sync */); err != nil {
return nil, err
}
batch.Close()
if err := eng.Flush(); err != nil {
return nil, err
var batch storage.Batch
var minWallTime int64
for i, key := range keys {
if scaled := len(keys) / numBatches; (i % scaled) == 0 {
if i > 0 {
log.Infof(ctx, "committing (%d/~%d)", i/scaled, numBatches)
if err := batch.Commit(false /* sync */); err != nil {
tb.Fatal(err)
}
batch.Close()
if err := eng.Flush(); err != nil {
tb.Fatal(err)
}
}
batch = eng.NewBatch()
minWallTime = sstTimestamps[i/scaled]
}
timestamp := hlc.Timestamp{WallTime: minWallTime + rand.Int63n(int64(batchTimeSpan))}
value := roachpb.MakeValueFromBytes(randutil.RandBytes(rng, valueBytes))
value.InitChecksum(key)
if err := storage.MVCCPut(ctx, batch, nil, key, timestamp, hlc.ClockTimestamp{}, value, nil); err != nil {
tb.Fatal(err)
}
batch = eng.NewBatch()
minWallTime = sstTimestamps[i/scaled]
}
timestamp := hlc.Timestamp{WallTime: minWallTime + rand.Int63n(int64(batchTimeSpan))}
value := roachpb.MakeValueFromBytes(randutil.RandBytes(rng, valueBytes))
value.InitChecksum(key)
if err := storage.MVCCPut(ctx, batch, nil, key, timestamp, hlc.ClockTimestamp{}, value, nil); err != nil {
return nil, err
if err := batch.Commit(false /* sync */); err != nil {
tb.Fatal(err)
}
}
if err := batch.Commit(false /* sync */); err != nil {
return nil, err
}
batch.Close()
if err := eng.Flush(); err != nil {
return nil, err
}
batch.Close()
if err := eng.Flush(); err != nil {
tb.Fatal(err)
}
eng.Close()
})

return eng, nil
log.Infof(context.Background(), "using test data: %s", dir)
eng, err := storage.Open(
ctx,
storage.Filesystem(dir),
cluster.MakeTestingClusterSettings(),
storage.MustExist,
)
if err != nil {
tb.Fatal(err)
}
testutils.ReadAllFiles(filepath.Join(dir, "*"))
return eng
}

// runIterate benchmarks iteration over the entire keyspace within time bounds
@@ -140,10 +136,7 @@ func runIterate(

// Store the database in this directory so we don't have to regenerate it on
// each benchmark run.
eng, err := loadTestData("mvcc_data", numKeys, numBatches, batchTimeSpan, valueBytes)
if err != nil {
b.Fatal(err)
}
eng := loadTestData(b, "mvcc_data", numKeys, numBatches, batchTimeSpan, valueBytes)
defer eng.Close()

b.SetBytes(int64(numKeys * valueBytes))
17 changes: 17 additions & 0 deletions pkg/cmd/dev/bench.go
@@ -12,6 +12,8 @@ package main

import (
"fmt"
"os"
"path/filepath"
"strings"

"github.com/spf13/cobra"
@@ -154,8 +156,23 @@ func (d *dev) bench(cmd *cobra.Command, commandLine []string) error {
}
args = append(args, goTestArgs...)
}
args = append(args, d.getGoTestEnvArgs()...)
args = append(args, d.getTestOutputArgs(false /* stress */, verbose, showLogs, streamOutput)...)
args = append(args, additionalBazelArgs...)
logCommand("bazel", args...)
return d.exec.CommandContextInheritingStdStreams(ctx, "bazel", args...)
}

func (d *dev) getGoTestEnvArgs() []string {
var goTestEnv []string
// Make the `$HOME/.cache/crdb-test-fixtures` directory available for reusable
// test fixtures, if available. See testfixtures.ReuseOrGenerate().
if cacheDir, err := os.UserCacheDir(); err == nil {
dir := filepath.Join(cacheDir, "crdb-test-fixtures")
if err := os.MkdirAll(dir, 0755); err == nil {
goTestEnv = append(goTestEnv, "--test_env", fmt.Sprintf("COCKROACH_TEST_FIXTURES_DIR=%s", dir))
goTestEnv = append(goTestEnv, fmt.Sprintf("--sandbox_writable_path=%s", dir))
}
}
return goTestEnv
}
38 changes: 36 additions & 2 deletions pkg/cmd/roachtest/tests/mixed_version_backup.go
@@ -39,6 +39,7 @@ import (
"github.com/cockroachdb/cockroach/pkg/util/protoutil"
"github.com/cockroachdb/cockroach/pkg/util/retry"
"github.com/cockroachdb/cockroach/pkg/util/timeutil"
"github.com/cockroachdb/cockroach/pkg/util/version"
"github.com/cockroachdb/errors"
"golang.org/x/sync/errgroup"
)
@@ -54,6 +55,14 @@ var (
Multiplier: 1.5,
MaxRetries: 50,
}

v231 = func() *version.Version {
v, err := version.Parse("v23.1.0")
if err != nil {
panic(fmt.Sprintf("failure parsing version: %v", err))
}
return v
}()
)

// sanitizeVersionForBackup takes the string representation of a
@@ -63,6 +72,25 @@ func sanitizeVersionForBackup(v string) string {
return invalidVersionRE.ReplaceAllString(clusterupgrade.VersionMsg(v), "")
}

// hasInternalSystemJobs returns true if the cluster is expected to
// have the `crdb_internal.system_jobs` vtable in the mixed-version
// context passed. If so, it should be used instead of `system.jobs`
// when querying job status.
func hasInternalSystemJobs(tc *mixedversion.Context) bool {
lowestVersion := tc.FromVersion // upgrades
if tc.FromVersion == clusterupgrade.MainVersion {
lowestVersion = tc.ToVersion // downgrades
}

// Add 'v' prefix expected by `version` package.
lowestVersion = "v" + lowestVersion
sv, err := version.Parse(lowestVersion)
if err != nil {
panic(fmt.Errorf("internal error: test context version (%s) expected to be parseable: %w", lowestVersion, err))
}
return sv.AtLeast(v231)
}

type (
// backupOption is an option passed to the `BACKUP` command (i.e.,
// `WITH ...` portion).
@@ -298,11 +326,17 @@ func (mvb *mixedVersionBackup) waitForJobSuccess(
var lastErr error
node, db := h.RandomDB(rng, mvb.roachNodes)
l.Printf("querying job status through node %d", node)

jobsQuery := "system.jobs WHERE id = $1"
if hasInternalSystemJobs(h.Context()) {
jobsQuery = fmt.Sprintf("(%s)", jobutils.InternalSystemJobsBaseQuery)
}
for r := retry.StartWithCtx(ctx, backupCompletionRetryOptions); r.Next(); {
var status string
var payloadBytes []byte
err := db.QueryRow(fmt.Sprintf(`SELECT status, payload FROM (%s)`,
jobutils.InternalSystemJobsBaseQuery), jobID).Scan(&status, &payloadBytes)
err := db.QueryRow(
fmt.Sprintf(`SELECT status, payload FROM %s`, jobsQuery), jobID,
).Scan(&status, &payloadBytes)
if err != nil {
lastErr = fmt.Errorf("error reading (status, payload) for job %d: %w", jobID, err)
l.Printf("%v", lastErr)
11 changes: 8 additions & 3 deletions pkg/cmd/roachtest/tests/multitenant_distsql.go
@@ -57,9 +57,14 @@ func runMultiTenantDistSQL(
timeoutMillis int,
) {
c.Put(ctx, t.Cockroach(), "./cockroach")
c.Start(ctx, t.L(), option.DefaultStartOpts(), install.MakeClusterSettings(install.SecureOption(true)), c.Node(1))
c.Start(ctx, t.L(), option.DefaultStartOpts(), install.MakeClusterSettings(install.SecureOption(true)), c.Node(2))
c.Start(ctx, t.L(), option.DefaultStartOpts(), install.MakeClusterSettings(install.SecureOption(true)), c.Node(3))
// This test sets a smaller default range size than the default due to
// performance and resource limitations. We set the minimum range max bytes to
// 1 byte to bypass the guardrails.
settings := install.MakeClusterSettings(install.SecureOption(true))
settings.Env = append(settings.Env, "COCKROACH_MIN_RANGE_MAX_BYTES=1")
c.Start(ctx, t.L(), option.DefaultStartOpts(), settings, c.Node(1))
c.Start(ctx, t.L(), option.DefaultStartOpts(), settings, c.Node(2))
c.Start(ctx, t.L(), option.DefaultStartOpts(), settings, c.Node(3))

const (
tenantID = 11