Support runtime chunk deduplication #1507

Closed · wants to merge 5 commits
3 changes: 3 additions & 0 deletions Cargo.toml
@@ -95,6 +95,7 @@ default = [
"backend-s3",
"backend-http-proxy",
"backend-localdisk",
"dedup",
]
virtiofs = [
"nydus-service/virtiofs",
@@ -116,6 +117,8 @@ backend-oss = ["nydus-storage/backend-oss"]
backend-registry = ["nydus-storage/backend-registry"]
backend-s3 = ["nydus-storage/backend-s3"]

dedup = ["nydus-storage/dedup"]

[workspace]
members = [
"api",
23 changes: 22 additions & 1 deletion docs/data-deduplication.md
@@ -164,4 +164,25 @@ So Nydus provides a node level CAS system to reduce data downloaded from the registry.

The node level CAS system helps to achieve O4 and O5.

# Node Level CAS System (WIP)
# Node Level CAS System
Data deduplication can also be achieved when accessing Nydus images. The key idea is to maintain information about the data chunks available on the local host in a database.
When a chunk is needed but not yet available in the uncompressed data blob files, the database is queried using the chunk digest as the key.
If a record with the same chunk digest already exists, the chunk data it points to is reused.
We call such a system CAS (Content Addressable Storage).

## Chunk Deduplication by Using CAS as L2 Cache
Collaborator: This still seems to be an experimental feature; do we still need to consider cas.db record recycling?

In this chunk deduplication mode, the CAS system works as an L2 cache to provide chunk data on demand, while keeping Nydus bootstrap blobs as they are.
It works in this way:
1. query the database when a chunk is needed but not available yet
2. copy data from the source blob to the target blob with `copy_file_range` if a record with the same chunk digest exists
3. download the chunk data from the remote registry if there is no record in the database
4. insert a new record into the database for the just-downloaded chunk so it can be reused later

![chunk_dedup_l2cache](images/chunk_dedup_l2_cache.png)

A data download operation can be avoided whenever a chunk already exists in the database.
And if the underlying filesystem supports data sharing, `copy_file_range` will create a reference instead of copying the data, thus reducing storage space consumption.
This design has the benefit of robustness: the target blob file has no dependency on the database or the source blob files, which eases garbage collection.
But it depends on the capability of the underlying filesystem to actually reduce storage consumption.

## Chunk Deduplication by Rebuilding Nydus Bootstrap (WIP)
265 changes: 265 additions & 0 deletions docs/images/chunk_dedup_l2_cache.drawio

Large diffs are not rendered by default.

Binary file added docs/images/chunk_dedup_l2_cache.png
75 changes: 75 additions & 0 deletions smoke/tests/chunk_dedup_test.go
@@ -0,0 +1,75 @@
// Copyright 2023 Nydus Developers. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0

package tests

import (
    "path/filepath"
    "testing"

    "github.com/containerd/nydus-snapshotter/pkg/converter"
    "github.com/dragonflyoss/image-service/smoke/tests/texture"
    "github.com/dragonflyoss/image-service/smoke/tests/tool"
    "github.com/dragonflyoss/image-service/smoke/tests/tool/test"
    "github.com/opencontainers/go-digest"
    "github.com/stretchr/testify/require"
)

const (
    paramIteration = "iteration"
)

type ChunkDedupTestSuite struct {
    t *testing.T
}

func (z *ChunkDedupTestSuite) TestChunkDedup() test.Generator {
    scenarios := tool.DescartesIterator{}
    scenarios.Dimension(paramIteration, []interface{}{1, 2})

    return func() (name string, testCase test.Case) {
        if !scenarios.HasNext() {
            return
        }
        scenario := scenarios.Next()

        ctx := tool.DefaultContext(z.t)
        ctx.Runtime.ChunkDedupDb = ctx.Env.WorkDir + "/cas.db"

        return scenario.Str(), func(t *testing.T) {
            z.testMakeLayers(*ctx, t)
        }
    }
}

func (z *ChunkDedupTestSuite) testMakeLayers(ctx tool.Context, t *testing.T) {
    // Prepare work directory
    ctx.PrepareWorkDir(t)
    defer ctx.Destroy(t)

    lowerLayer := texture.MakeLowerLayer(t, filepath.Join(ctx.Env.WorkDir, "source"))
    lowerOCIBlobDigest, lowerRafsBlobDigest := lowerLayer.PackRef(t, ctx, ctx.Env.BlobDir, ctx.Build.OCIRefGzip)
    mergeOption := converter.MergeOption{
        BuilderPath:   ctx.Binary.Builder,
        ChunkDictPath: "",
        OCIRef:        true,
    }
    actualDigests, lowerBootstrap := tool.MergeLayers(t, ctx, mergeOption, []converter.Layer{
        {
            Digest:         lowerRafsBlobDigest,
            OriginalDigest: &lowerOCIBlobDigest,
        },
    })
    require.Equal(t, []digest.Digest{lowerOCIBlobDigest}, actualDigests)

    // Verify lower layer mounted by nydusd
    ctx.Env.BootstrapPath = lowerBootstrap
    tool.Verify(t, ctx, lowerLayer.FileTree)

Collaborator: We may need a way to check if the CAS works.

}

func TestChunkDedup(t *testing.T) {
    test.Run(t, &ChunkDedupTestSuite{t: t})
}
1 change: 1 addition & 0 deletions smoke/tests/tool/context.go
@@ -39,6 +39,7 @@ type RuntimeContext struct {
    RafsMode       string
    EnablePrefetch bool
    AmplifyIO      uint64
    ChunkDedupDb   string
}

type EnvContext struct {
4 changes: 4 additions & 0 deletions smoke/tests/tool/nydusd.go
@@ -69,6 +69,7 @@ type NydusdConfig struct {
    AccessPattern bool
    PrefetchFiles []string
    AmplifyIO     uint64
    ChunkDedupDb  string
}

type Nydusd struct {
@@ -205,6 +206,9 @@ func (nydusd *Nydusd) Mount() error {
    if len(nydusd.BootstrapPath) > 0 {
        args = append(args, "--bootstrap", nydusd.BootstrapPath)
    }
    if len(nydusd.ChunkDedupDb) > 0 {
        args = append(args, "--dedup-db", nydusd.ChunkDedupDb)
    }

    cmd := exec.Command(nydusd.NydusdPath, args...)
    cmd.Stdout = os.Stdout
1 change: 1 addition & 0 deletions smoke/tests/tool/verify.go
@@ -30,6 +30,7 @@ func Verify(t *testing.T, ctx Context, expectedFiles map[string]*File) {
        RafsMode:       ctx.Runtime.RafsMode,
        DigestValidate: false,
        AmplifyIO:      ctx.Runtime.AmplifyIO,
        ChunkDedupDb:   ctx.Runtime.ChunkDedupDb,
    }

    nydusd, err := NewNydusd(config)
23 changes: 21 additions & 2 deletions src/bin/nydusd/main.rs
@@ -26,6 +26,7 @@ use nydus_service::{
    create_daemon, create_fuse_daemon, create_vfs_backend, validate_threads_configuration,
    Error as NydusError, FsBackendMountCmd, FsBackendType, ServiceArgs,
};
use nydus_storage::cache::CasMgr;

use crate::api_server_glue::ApiServerController;

@@ -50,7 +51,7 @@ fn thread_validator(v: &str) -> std::result::Result<String, String> {
}

fn append_fs_options(app: Command) -> Command {
    app.arg(
    let mut app = app.arg(
        Arg::new("bootstrap")
            .long("bootstrap")
            .short('B')
@@ -87,7 +88,18 @@
            .help("Mountpoint within the FUSE/virtiofs device to mount the RAFS/passthroughfs filesystem")
            .default_value("/")
            .required(false),
    )
    );

    #[cfg(feature = "dedup")]
    {
        app = app.arg(
            Arg::new("dedup-db")
                .long("dedup-db")
                .help("Database file for chunk deduplication"),
        );
    }

    app
}

fn append_fuse_options(app: Command) -> Command {
@@ -750,6 +762,13 @@ fn main() -> Result<()> {
    dump_program_info();
    handle_rlimit_nofile_option(&args, "rlimit-nofile")?;

    #[cfg(feature = "dedup")]
    if let Some(db) = args.get_one::<String>("dedup-db") {
        let mgr = CasMgr::new(db).map_err(|e| eother!(format!("{}", e)))?;
        info!("Enable chunk deduplication by using database at {}", db);
        CasMgr::set_singleton(mgr);
    }

    match args.subcommand_name() {
        Some("singleton") => {
            // Safe to unwrap because the subcommand is `singleton`.
1 change: 0 additions & 1 deletion storage/Cargo.toml
@@ -58,7 +58,6 @@ regex = "1.7.0"
toml = "0.5"

[features]
default = ["dedup"]
backend-localdisk = []
backend-localdisk-gpt = ["gpt", "backend-localdisk"]
backend-localfs = []
24 changes: 22 additions & 2 deletions storage/src/cache/cachedfile.rs
@@ -13,6 +13,7 @@ use std::collections::HashSet;
use std::fs::File;
use std::io::{ErrorKind, Read, Result};
use std::mem::ManuallyDrop;
use std::ops::Deref;
use std::os::unix::io::{AsRawFd, RawFd};
use std::sync::atomic::{AtomicBool, AtomicU32, Ordering};
use std::sync::{Arc, Mutex};
@@ -29,7 +30,7 @@ use tokio::runtime::Runtime;
use crate::backend::BlobReader;
use crate::cache::state::ChunkMap;
use crate::cache::worker::{AsyncPrefetchConfig, AsyncPrefetchMessage, AsyncWorkerMgr};
use crate::cache::{BlobCache, BlobIoMergeState};
use crate::cache::{BlobCache, BlobIoMergeState, CasMgr};
use crate::device::{
    BlobChunkInfo, BlobInfo, BlobIoDesc, BlobIoRange, BlobIoSegment, BlobIoTag, BlobIoVec,
    BlobObject, BlobPrefetchRequest,
@@ -133,8 +134,10 @@ pub(crate) struct FileCacheEntry {
    pub(crate) blob_info: Arc<BlobInfo>,
    pub(crate) cache_cipher_object: Arc<Cipher>,
    pub(crate) cache_cipher_context: Arc<CipherContext>,
    pub(crate) cas_mgr: Option<Arc<CasMgr>>,
    pub(crate) chunk_map: Arc<dyn ChunkMap>,
    pub(crate) file: Arc<File>,
    pub(crate) file_path: Arc<String>,
    pub(crate) meta: Option<FileCacheMeta>,
    pub(crate) metrics: Arc<BlobcacheMetrics>,
    pub(crate) prefetch_state: Arc<AtomicU32>,
@@ -182,13 +185,16 @@ impl FileCacheEntry {
    }

    fn delay_persist_chunk_data(&self, chunk: Arc<dyn BlobChunkInfo>, buffer: Arc<DataBuffer>) {
        let blob_info = self.blob_info.clone();
        let delayed_chunk_map = self.chunk_map.clone();
        let file = self.file.clone();
        let file_path = self.file_path.clone();
        let metrics = self.metrics.clone();
        let is_raw_data = self.is_raw_data;
        let is_cache_encrypted = self.is_cache_encrypted;
        let cipher_object = self.cache_cipher_object.clone();
        let cipher_context = self.cache_cipher_context.clone();
        let cas_mgr = self.cas_mgr.clone();

        metrics.buffered_backend_size.add(buffer.size() as u64);
        self.runtime.spawn_blocking(move || {
@@ -240,6 +246,11 @@
            };
            let res = Self::persist_cached_data(&file, offset, buf);
            Self::_update_chunk_pending_status(&delayed_chunk_map, chunk.as_ref(), res.is_ok());
            if let Some(mgr) = cas_mgr {
                if let Err(e) = mgr.record_chunk(&blob_info, chunk.deref(), file_path.as_ref()) {
                    warn!("failed to record chunk state for dedup, {}", e);
                }
            }
        });
    }

@@ -973,6 +984,22 @@ impl FileCacheEntry {

        trace!("dispatch single io range {:?}", req);
        for (i, chunk) in req.chunks.iter().enumerate() {
            let is_ready = match self.chunk_map.check_ready_and_mark_pending(chunk.as_ref()) {
            let mut is_ready = match self.chunk_map.check_ready_and_mark_pending(chunk.as_ref()) {
                Ok(true) => true,
                Ok(false) => false,
                Err(StorageError::Timeout) => false, // Retry if waiting for inflight IO timeouts
                Err(e) => return Err(einval!(e)),
            };

            if !is_ready {
                if let Some(mgr) = self.cas_mgr.as_ref() {
                    is_ready = mgr.dedup_chunk(&self.blob_info, chunk.deref(), &self.file);
                    if is_ready {
                        self.update_chunk_pending_status(chunk.deref(), true);
                    }
                }
            }

            // Directly read chunk data from file cache into user buffer iff:
            // - the chunk is ready in the file cache
            // - data in the file cache is plaintext.
5 changes: 3 additions & 2 deletions storage/src/cache/dedup/db.rs
@@ -8,7 +8,7 @@ use std::path::Path;

use r2d2::{Pool, PooledConnection};
use r2d2_sqlite::SqliteConnectionManager;
use rusqlite::{Connection, DropBehavior, OptionalExtension, Transaction};
use rusqlite::{Connection, DropBehavior, OpenFlags, OptionalExtension, Transaction};

use super::Result;

@@ -24,7 +24,8 @@ impl CasDb {
    }

    pub fn from_file(db_path: impl AsRef<Path>) -> Result<CasDb> {
        let mgr = SqliteConnectionManager::file(db_path);
        let mgr = SqliteConnectionManager::file(db_path)
            .with_flags(OpenFlags::SQLITE_OPEN_CREATE | OpenFlags::SQLITE_OPEN_READ_WRITE);
        let pool = r2d2::Pool::new(mgr)?;
        let conn = pool.get()?;