

[Enhancement] Skip tablet schema in rowset meta during ingestion. (backport #50873) #51843

Closed

mergify[bot] wants to merge 1 commit into branch-3.2 from mergify/bp/branch-3.2/pr-50873

Conversation

mergify[bot] (Contributor) commented Oct 12, 2024

Why I'm doing:

Since version 3.2, each Rowset has saved its own schema, so the complete tablet schema is stored in every Rowset's metadata. This can lead to the following issues:

  1. When the import frequency is very high, and a large number of Rowsets are generated, the metadata in RocksDB grows, particularly in non-primary key (non-PK) table scenarios. This is because, in non-PK tables, each update to the tablet metadata rewrites the historical Rowset metadata, leading to a large amount of obsolete data in RocksDB.
  2. When the tablet has a very large number of columns (e.g., 10,000 columns), the time taken to persist the Rowset metadata increases, especially when the imported data volume is small.

These two issues can eventually reduce the efficiency of real-time imports.

What I'm doing:

This PR attempts to solve the issue of reduced import efficiency caused by metadata bloat.

One feasible solution is to store the tablet schema only once for all Rowsets that share the same schema. Instead of saving the complete schema in each Rowset's metadata, a reference or marker would be saved in each Rowset’s metadata to point to the corresponding tablet schema.

However, the issue with this solution is compatibility. In previous versions, each Rowset generated its schema from its own metadata. If the system were upgraded and then rolled back to an older version, the older version would not be able to locate the schema through a reference or marker. This would lead to incorrect schemas, because previous versions expect the full schema to be included in each Rowset's metadata.

So I chose a more conservative solution; the main changes are as follows:

  1. Skip the schema in Rowset meta during import if the Rowset's schema is identical to the latest tablet schema.
  2. Rewrite the RowsetMetas that do not store a schema whenever the tablet schema is updated.
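The first change can be sketched as follows. This is a minimal illustration with hypothetical, simplified stand-ins for the real `TabletSchema`/`RowsetMeta` types (the actual StarRocks BE implementation is in C++ and far richer):

```python
from dataclasses import dataclass
from typing import Optional, Tuple

# Hypothetical, simplified stand-ins for the real TabletSchema and
# RowsetMeta types; the schema is reduced to a version plus column names.
@dataclass(frozen=True)
class TabletSchema:
    schema_version: int
    columns: Tuple[str, ...]

@dataclass
class RowsetMeta:
    # None means the schema was skipped at ingestion time.
    schema: Optional[TabletSchema] = None

def make_rowset_meta(rowset_schema: TabletSchema,
                     latest_tablet_schema: TabletSchema) -> RowsetMeta:
    """Change 1: persist the schema in the rowset meta only when it differs
    from the latest tablet schema; otherwise skip it to keep the meta small."""
    if rowset_schema == latest_tablet_schema:
        return RowsetMeta(schema=None)
    return RowsetMeta(schema=rowset_schema)
```

With identical schemas the rowset meta carries no schema payload at all, which is what keeps per-import metadata small for very wide tables.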

If the BE exits at any given time and restarts, those Rowsets that have not saved their own schema will be initialized using the tablet's current schema. Since the Rowset meta without schemas is updated each time the tablet schema is modified, it ensures that after the BE restarts, every Rowset can find its corresponding schema.
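The backfill-on-DDL and restore-on-restart behaviour described above can be illustrated with a small self-contained sketch (hypothetical simplified types; the schema is reduced to a plain string, unlike the real BE implementation):

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class RowsetMeta:
    # None means the schema was skipped at ingestion time.
    schema: Optional[str] = None  # schema simplified to a string

@dataclass
class Tablet:
    schema: str
    rowset_metas: List[RowsetMeta]

    def update_schema(self, new_schema: str) -> None:
        # Change 2: before switching to the new schema, backfill the current
        # schema into every rowset meta that skipped it, so those rowsets
        # keep the schema they were actually written with.
        for meta in self.rowset_metas:
            if meta.schema is None:
                meta.schema = self.schema
        self.schema = new_schema

    def schemas_after_restart(self) -> List[str]:
        # On BE restart, a rowset that skipped its schema is initialized
        # from the tablet's current schema.
        return [m.schema if m.schema is not None else self.schema
                for m in self.rowset_metas]
```

Because schema-less rowset metas are rewritten before every tablet schema change, a rowset without a stored schema always matches the tablet's current schema, so restart recovery is safe.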

Moreover, this logic is backward compatible with older versions, so even after an upgrade and subsequent downgrade, the BE will still be able to retrieve the correct schema.

Compared to imports, DDL operations can be considered low-frequency tasks. As a result, in most cases, the Rowset meta generated during imports will not carry the schema, which helps alleviate metadata bloat.

However, there can still be some bad cases. For example, in non-PK tables, during the period between an alter operation and the deletion of outdated Rowset meta, if the number of outdated Rowsets is particularly large, the system will still rewrite all outdated Rowsets each time the tablet meta is saved. This can still lead to a decline in import performance.

To solve this issue, we need to resolve the problem of storing multiple copies of the same schema. I think we can first support downgrading and then resolve this issue, allowing for an iteration based on this PR.

Below is a test based on this PR:
a table with 200 columns, one bucket, writing one row of data at a time, with 10 concurrent threads, executed 1,000 times.

| Branch | Table type | Total cost time |
|--------|------------|-----------------|
| main-8f128b | Duplicate | 1043.77 s |
| this PR | Duplicate | 178.49 s |
| main-8f128b | Primary | 188.46 s |
| this PR | Primary | 186.68 s |

Fixes #issue

What type of PR is this:

  • BugFix
  • Feature
  • Enhancement
  • Refactor
  • UT
  • Doc
  • Tool

Does this PR entail a change in behavior?

  • Yes, this PR will result in a change in behavior.
  • No, this PR will not result in a change in behavior.

If yes, please specify the type of change:

  • Interface/UI changes: syntax, type conversion, expression evaluation, display information
  • Parameter changes: default values, similar parameters but with different default values
  • Policy changes: use new policy to replace old one, functionality automatically enabled
  • Feature removed
  • Miscellaneous: upgrade & downgrade compatibility, etc.

Checklist:

  • I have added test cases for my bug fix or my new feature
  • This pr needs user documentation (for new or modified features or behaviors)
    • I have added documentation for my new feature or new function
  • This is a backport pr

Bugfix cherry-pick branch check:

  • I have checked the version labels which the pr will be auto-backported to the target branch
    • 3.3
    • 3.2
    • 3.1
    • 3.0
    • 2.5

This is an automatic backport of pull request #50873 done by [Mergify](https://mergify.com).


Signed-off-by: sevev <[email protected]>
Signed-off-by: zhangqiang <[email protected]>
(cherry picked from commit 3005729)

# Conflicts:
#	be/src/common/config.h
#	be/src/storage/compaction_task.h
#	be/src/storage/tablet.cpp
#	be/src/storage/tablet.h
#	be/src/storage/tablet_meta.cpp
#	be/src/storage/tablet_meta.h
#	be/src/storage/txn_manager.cpp
mergify[bot] (Contributor, Author) commented Oct 12, 2024

Cherry-pick of 3005729 has failed:

On branch mergify/bp/branch-3.2/pr-50873
Your branch is up to date with 'origin/branch-3.2'.

You are currently cherry-picking commit 3005729289.
  (fix conflicts and run "git cherry-pick --continue")
  (use "git cherry-pick --skip" to skip this patch)
  (use "git cherry-pick --abort" to cancel the cherry-pick operation)

Changes to be committed:
	modified:   be/src/storage/base_tablet.h
	modified:   be/src/storage/data_dir.cpp
	modified:   be/src/storage/olap_common.h
	modified:   be/src/storage/rowset/rowset_meta.h
	modified:   be/src/storage/rowset/rowset_meta_manager.cpp
	modified:   be/src/storage/rowset/rowset_meta_manager.h
	modified:   be/src/storage/tablet_meta_manager.cpp
	modified:   be/src/storage/tablet_meta_manager.h
	modified:   be/src/storage/tablet_updates.cpp
	modified:   be/src/storage/tablet_updates.h
	modified:   be/test/storage/tablet_updates_test.cpp
	modified:   be/test/storage/tablet_updates_test.h

Unmerged paths:
  (use "git add <file>..." to mark resolution)
	both modified:   be/src/common/config.h
	both modified:   be/src/storage/compaction_task.h
	both modified:   be/src/storage/tablet.cpp
	both modified:   be/src/storage/tablet.h
	both modified:   be/src/storage/tablet_meta.cpp
	both modified:   be/src/storage/tablet_meta.h
	both modified:   be/src/storage/txn_manager.cpp

To fix up this pull request, you can check it out locally. See documentation: https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/reviewing-changes-in-pull-requests/checking-out-pull-requests-locally

mergify[bot] (Contributor, Author) commented Oct 12, 2024

@mergify[bot]: Backport conflict, please resolve the conflict and resubmit the PR

mergify bot closed this Oct 12, 2024
auto-merge was automatically disabled October 12, 2024 09:11

Pull request was closed
