This repository has been archived by the owner on Jun 23, 2022. It is now read-only.

refactor: simplify mutation_log write_pending_mutations #436

Merged 4 commits on Apr 13, 2020

Conversation

@neverchanje (Contributor) commented Apr 9, 2020

This PR is a minor refactoring of mutation_log_shared::write_pending_mutations and mutation_log_private::write_pending_mutations, which are mostly alike.

It breaks write_pending_mutations down by extracting a new function, commit_pending_mutations, which shortens the code. For the private log, I also refactored the callback of commit_log_block to simplify the logic. (The original version is:

```cpp
dassert(_is_writing.load(std::memory_order_relaxed), "");
auto hdr = (log_block_header *)block->front().data();
dassert(hdr->magic == 0xdeadbeef, "header magic is changed: 0x%x", hdr->magic);
if (err == ERR_OK) {
    dassert(sz == block->size(),
            "log write size must equal to the given size: %d vs %d",
            (int)sz,
            block->size());
    dassert(sz == sizeof(log_block_header) + hdr->length,
            "log write size must equal to (header size + data size): %d vs (%d + %d)",
            (int)sz,
            (int)sizeof(log_block_header),
            hdr->length);
    // flush to ensure that there is no gap between private log and in-memory buffer
    // so that we can get all mutations in learning process.
    //
    // FIXME : the file could have been closed
    lf->flush();
    // update _private_max_commit_on_disk after written into log file done
    update_max_commit_on_disk(max_commit);
} else {
    derror("write private log failed, err = %s", err.to_string());
}
// here we use _is_writing instead of _issued_write.expired() to check writing done,
// because the following callbacks may run before "block" released, which may cause
// the next init_prepare() not starting the write.
_is_writing.store(false, std::memory_order_relaxed);
// notify error when necessary
if (err != ERR_OK) {
    if (_io_error_callback) {
        _io_error_callback(err);
    }
} else {
    // start to write if possible
    _plock.lock();
    if (!_is_writing.load(std::memory_order_acquire) && _pending_write &&
        (static_cast<uint32_t>(_pending_write->size()) >= _batch_buffer_bytes ||
         static_cast<uint32_t>(_pending_write->data().size()) >= _batch_buffer_max_count ||
         flush_interval_expired())) {
        write_pending_mutations(true);
    } else {
        _plock.unlock();
    }
}
},
0);
```

This may help when you review this PR.)
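To illustrate the shape of the refactor, here is a minimal, hypothetical sketch (simplified names and types, not the actual rDSN classes): each log's write_pending_mutations delegates to a shared commit_pending_mutations helper and supplies only its own completion callback, which is the part that differs between the shared and private logs. The write itself is simulated synchronously here.

```cpp
#include <atomic>
#include <cassert>
#include <cstddef>
#include <functional>
#include <string>
#include <vector>

// Toy stand-in for the log block handed to the writer.
struct log_block {
    std::vector<std::string> fragments;
    size_t size() const {
        size_t n = 0;
        for (const auto &f : fragments)
            n += f.size();
        return n;
    }
};

class mutation_log {
public:
    // Shared path: mark the write in flight, hand the block to the
    // (simulated) writer, then invoke the subclass-specific callback.
    void commit_pending_mutations(log_block block,
                                  std::function<void(int err, size_t sz)> cb) {
        _is_writing.store(true, std::memory_order_release);
        size_t sz = block.size(); // pretend the write succeeded
        cb(/*err=*/0, sz);
        _is_writing.store(false, std::memory_order_release);
    }

protected:
    std::atomic<bool> _is_writing{false};
};

class mutation_log_private : public mutation_log {
public:
    size_t last_written = 0;

    void write_pending_mutations(log_block block) {
        // Only the callback body is private-log specific (in the real code:
        // flush, update_max_commit_on_disk, error notification).
        commit_pending_mutations(std::move(block), [this](int err, size_t sz) {
            if (err == 0)
                last_written = sz;
        });
    }
};
```

The point of the extraction is that the shared and private logs no longer duplicate the enqueue/commit boilerplate; each contributes only its completion logic.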
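The trigger condition at the end of the callback can be read in isolation: after a write completes, the next pending block is written immediately only if it is already large enough, holds enough mutations, or the flush interval has expired. A standalone sketch of that predicate (illustrative names and thresholds, not the actual rDSN signature):

```cpp
#include <cassert>
#include <cstdint>

// State of the pending write buffer at the moment a write completes.
struct pending_state {
    uint32_t pending_bytes = 0;          // _pending_write->size()
    uint32_t pending_count = 0;          // _pending_write->data().size()
    bool flush_interval_expired = false; // flush_interval_expired()
};

// Start the next write only when one of the batching thresholds is hit.
bool should_write_now(const pending_state &s,
                      uint32_t batch_buffer_bytes,
                      uint32_t batch_buffer_max_count) {
    return s.pending_bytes >= batch_buffer_bytes ||
           s.pending_count >= batch_buffer_max_count ||
           s.flush_interval_expired;
}
```

Otherwise the lock is released and the pending block waits for the next mutation or the flush timer.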

@neverchanje neverchanje marked this pull request as ready for review April 10, 2020 03:38