[HUDI-4465] Optimizing file-listing sequence of Metadata Table #6016
Conversation
hey @alexeykudinkin: can you link the right JIRA for the patch?
    .withIndexConfig(HoodieIndexConfig.newBuilder().withIndexType(HoodieIndex.IndexType.BUCKET).withBucketNum("1").build())
    .build();

Properties props = getPropertiesForKeyGen(true);
pass `HoodieTableConfig.POPULATE_META_FIELDS.defaultValue()` instead of hard-coding `true`?
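A minimal sketch of what the suggestion amounts to, with the Hudi helper and config default stubbed out locally (the stubs are assumptions for illustration, not the real signatures):

```java
import java.util.Properties;

public class KeyGenPropsDemo {
  // Stand-in for HoodieTableConfig.POPULATE_META_FIELDS.defaultValue()
  static final boolean POPULATE_META_FIELDS_DEFAULT = true;

  // Stand-in for the test helper under discussion
  static Properties getPropertiesForKeyGen(boolean populateMetaFields) {
    Properties props = new Properties();
    props.setProperty("hoodie.populate.meta.fields", String.valueOf(populateMetaFields));
    return props;
  }

  public static void main(String[] args) {
    // Hard-coded flag (what the test does today):
    Properties hardCoded = getPropertiesForKeyGen(true);
    // Suggested: derive the flag from the config's declared default, so the
    // test keeps tracking the default if it ever changes:
    Properties fromDefault = getPropertiesForKeyGen(POPULATE_META_FIELDS_DEFAULT);
    System.out.println(hardCoded.equals(fromDefault)); // true while the default is true
  }
}
```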
@@ -398,7 +398,7 @@ public HoodieArchivedTimeline getArchivedTimeline(String startTs) {
   public void validateTableProperties(Properties properties) {
     // Once meta fields are disabled, it can't be re-enabled for a given table.
     if (!getTableConfig().populateMetaFields()
-        && Boolean.parseBoolean((String) properties.getOrDefault(HoodieTableConfig.POPULATE_META_FIELDS.key(), HoodieTableConfig.POPULATE_META_FIELDS.defaultValue()))) {
+        && Boolean.parseBoolean((String) properties.getOrDefault(HoodieTableConfig.POPULATE_META_FIELDS.key(), HoodieTableConfig.POPULATE_META_FIELDS.defaultValue().toString()))) {
is it necessary? it's already being type cast to String
It's not -- `defaultValue()` returns a `Boolean`, so the `(String)` cast alone would fail; it needs the explicit `toString()`.
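For readers following along: `POPULATE_META_FIELDS` is a boolean config property, so its `defaultValue()` is a `Boolean`, not a `String`. A self-contained sketch of the failure mode the `.toString()` avoids (the key string here is just a stand-in):

```java
import java.util.Properties;

public class CastDemo {
  public static void main(String[] args) {
    Properties props = new Properties(); // key intentionally absent
    Boolean defaultValue = Boolean.TRUE; // stands in for POPULATE_META_FIELDS.defaultValue()

    try {
      // getOrDefault returns Object; with the key absent we get the Boolean
      // back, and the unconditional (String) cast blows up:
      String v = (String) props.getOrDefault("hoodie.populate.meta.fields", defaultValue);
      System.out.println(Boolean.parseBoolean(v));
    } catch (ClassCastException e) {
      System.out.println("cast failed: " + e); // this branch is taken
    }

    // Converting the default up front fixes it, which is what the diff does:
    String v = (String) props.getOrDefault("hoodie.populate.meta.fields", defaultValue.toString());
    System.out.println(Boolean.parseBoolean(v)); // true
  }
}
```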
}

/**
 * TODO elaborate
todo? javadoc only right?
  // Read the content
- HoodieHFileReader<IndexedRecord> reader = new HoodieHFileReader<>(fs, pathForReader, content, Option.of(writerSchema));
+ HoodieHFileReader<IndexedRecord> reader = new HoodieHFileReader<>(null, pathForReader, content, Option.of(writerSchema));
This could affect HFile reading. I believe there is some validation in the HFile system or HFile's reader context for `fs` to be non-null. I think we should still pass `fs`, and still keep this line in `HoodieHFileUtils#createHFileReader`:
`Configuration conf = new Configuration(false);`
Yeah, I checked it and it actually doesn't use `fs` at all.
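The `new Configuration(false)` line ties into one of the change-log items ("Avoid loading defaults in Hadoop conf when init-ing HFile reader"). A minimal sketch of what the boolean flag changes; this is standard Hadoop `Configuration` behavior:

```java
import org.apache.hadoop.conf.Configuration;

public class ConfDemo {
  public static void main(String[] args) {
    // The no-arg constructor loads core-default.xml / core-site.xml from the
    // classpath -- noticeable overhead when a reader is created per file:
    Configuration withDefaults = new Configuration();

    // Passing false skips default-resource loading entirely; only properties
    // set explicitly (or via addResource) are visible:
    Configuration bare = new Configuration(false);

    System.out.println(withDefaults.get("fs.defaultFS")); // "file:///" from the defaults
    System.out.println(bare.get("fs.defaultFS"));         // null -- nothing was loaded
  }
}
```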
private List<String> getAllPartitionPathsUnchecked() {
  try {
    if (partitionColumns.length == 0) {
      return Collections.singletonList("");
Should it be `Collections.emptyList()`?
Non-partitioned table has exactly one partition, which we designate w/ ""
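A small self-contained sketch of the convention being described, with the real metadata lookup stubbed out as a hypothetical helper:

```java
import java.util.Collections;
import java.util.List;

public class PartitionConventionDemo {
  public static void main(String[] args) {
    // Zero partition columns -> exactly one partition, designated by ""
    List<String> partitions = allPartitionPaths(new String[0]);
    for (String p : partitions) {
      System.out.println(p.isEmpty() ? "<non-partitioned: files sit under the base path>" : p);
    }
  }

  // Mirrors the convention from the diff above
  static List<String> allPartitionPaths(String[] partitionColumns) {
    return partitionColumns.length == 0
        ? Collections.singletonList("")
        : listPartitionsFromMetadata(partitionColumns);
  }

  // Hypothetical stand-in for the metadata-table listing
  static List<String> listPartitionsFromMetadata(String[] partitionColumns) {
    throw new UnsupportedOperationException("not part of this sketch");
  }
}
```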
      .getAllFilesInPartition(partitionPath);
}

@Override
public Map<String, FileStatus[]> getAllFilesInPartitions(List<String> partitions)
    throws IOException {
  if (partitions.isEmpty()) {
    return Collections.emptyMap();
If `BaseHoodieTableFileIndex#getAllPartitionPathsUnchecked` returns `Collections.singletonList("")`, then should we add an entry for `""` in the map, or rather make `getAllPartitionPathsUnchecked` return `Collections.emptyList()`?
Agree, this is somewhat dissonant, but that's just the way things are -- for non-partitioned tables it's assumed that the only partition that is there has to be identified by ""
String keyGen = properties.getProperty("hoodie.datasource.write.keygenerator.class");
if (!Objects.equals(keyGen, "org.apache.hudi.keygen.NonpartitionedKeyGenerator")) {
  builder.setPartitionFields("some_nonexistent_field");
extract to constant to standardize across tests?
I don't think we should actually standardize on this one, it's just to stop the bleeding in misconfigured tests
private def shouldValidatePartitionColumns(spark: SparkSession): Boolean = {
  // NOTE: We can't use the helper method nor the config entry, to stay compatible w/ Spark 2.4
  spark.sessionState.conf.getConfString("spark.sql.sources.validatePartitionColumns", "true").toBoolean
Should this go into Spark2Adapter or Spark2ParsePartitionUtil?
We don't need to
LGTM. Can you please rebase? We can land once the CI is green.
… reflection in the hot-path
…cing it w/ - `CachingPath` object - Invoking more performant unsafe ctors/utils
Use `CachingPath` in `SparkHoodieTableFileIndex`; Tidying up;
Refactored `SerializablePath` to serialize `URI` instead (to avoid parsing, when deser); Tidying up
// Make sure key-generator is configured properly
ValidationUtils.checkArgument(recordKeyField == null || !recordKeyField.isEmpty(),
    "Record key field has to be non-empty!");
ValidationUtils.checkArgument(partitionPathField == null || !partitionPathField.isEmpty(),
Should the validation message be more user-friendly? Let's say "Partition path field has to be non-empty! For non-partitioned table, set key generator class to NonPartitionedKeyGenerator". Also, why are these validations only added for SimpleKeyGenerator? Why not other keygens as well?
I don't think it makes sense to put suggestions into exception messages -- an exception message should focus on the problem that triggered it rather than on potential remedies. (An empty partition-path field is usually a sign of misconfiguration: there's no default value, so it means the user passed "" explicitly.)
Fair enough.
Should we add these validations to other keygens as well?
Discussed offline. It will be taken up separately. @alexeykudinkin, in case you have a JIRA, please link it here. For simple keygen we need the validation because of misconfiguration in some tests that were passing "" as partition fields.
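For context, a self-contained sketch of the misconfiguration these checks catch: leaving the partition-path field unset (null) is fine, while explicitly passing "" fails fast. `checkArgument` below is a local stand-in for `org.apache.hudi.common.util.ValidationUtils.checkArgument`:

```java
public class KeyGenValidationDemo {
  public static void main(String[] args) {
    validate("user_id", null);   // OK: field simply not configured
    validate("user_id", "date"); // OK: partitioned table
    validate("user_id", "");     // throws -- "" is a misconfiguration
  }

  // Mirrors the two checkArgument calls from the diff
  static void validate(String recordKeyField, String partitionPathField) {
    checkArgument(recordKeyField == null || !recordKeyField.isEmpty(),
        "Record key field has to be non-empty!");
    checkArgument(partitionPathField == null || !partitionPathField.isEmpty(),
        "Partition path field has to be non-empty!");
  }

  // Local stand-in for ValidationUtils.checkArgument
  static void checkArgument(boolean condition, String message) {
    if (!condition) {
      throw new IllegalArgumentException(message);
    }
  }
}
```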
Optimizes file-listing sequence of the Metadata Table to make sure it's on par or better than FS-based file-listing.

Change log:
- Cleaned up avoidable instantiations of Hadoop's `Path`
- Replaced `new Path` w/ `createUnsafePath` where possible
- Cached `TimestampFormatter`, `DateFormatter` for timezone
- Avoid loading defaults in Hadoop conf when init-ing HFile reader
- Avoid re-instantiating `BaseTableMetadata` twice w/in `BaseHoodieTableFileIndex`
- Avoid looking up `FileSystem` for every partition when listing partitioned table, instead do it just once
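On the `CachingPath` / `createUnsafePath` items: an illustrative sketch of the idea, not Hudi's actual implementation (only the method name comes from the change log). Hadoop's `Path(Path, String)` constructor resolves and normalizes two URIs on every call; when the caller already knows both fragments are well-formed, a single string concatenation and one parse suffice:

```java
import org.apache.hadoop.fs.Path;

// Illustrative only -- NOT Hudi's CachingPath
public final class UnsafePaths {
  private UnsafePaths() {}

  // "Unsafe" in the sense that nothing is normalized: valid only when parent
  // has no trailing slash, child has no leading slash, and neither contains
  // "..", "//" or other segments that would need resolution
  public static Path createUnsafePath(String parent, String child) {
    return new Path(parent + Path.SEPARATOR + child); // one parse, no URI resolution
  }

  public static void main(String[] args) {
    System.out.println(createUnsafePath("s3a://bucket/table", "2022/07/01"));
    // s3a://bucket/table/2022/07/01
  }
}
```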
- List<String> matchedPartitionPaths = FSUtils.getAllPartitionPaths(engineContext, metadataConfig, basePath)
+ List<String> matchedPartitionPaths = getAllPartitionPathsUnchecked()
this change affects partitioned tables that meet both of these conditions: 1) `hoodie.table.partition.fields` not present in table config, and 2) metadata disabled. `getAllPartitionPathsUnchecked()` treats them as a non-partitioned table, which results in no records being loaded.
The core of the problem is that `hoodie.table.partition.fields` has to be properly configured -- the table would be considered non-partitioned by some parts of the code (outside of this one), so we need to make sure this is set properly.
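To make the remedy concrete: the table config lives in `<base-path>/.hoodie/hoodie.properties`, and the key under discussion would look like the fragment below (the partition field name is hypothetical):

```properties
# When this key is absent, several code paths -- not just this one --
# treat the table as non-partitioned.
hoodie.table.partition.fields=date_col
```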
What is the purpose of the pull request
This PR optimizes the file-listing sequence of the Metadata Table to make sure it's on par with or better than FS-based file-listing.
Change log:
- Cleaned up avoidable instantiations of Hadoop's `Path`
- Replaced `new Path` w/ `createUnsafePath` where possible
- Cached `TimestampFormatter`, `DateFormatter` for timezone
- Avoid loading defaults in Hadoop conf when init-ing HFile reader
- Avoid re-instantiating `BaseTableMetadata` twice w/in `BaseHoodieTableFileIndex`
- Avoid looking up `FileSystem` for every partition when listing partitioned table, instead do it just once
Brief change log
See above
Verify this pull request
This pull request is already covered by existing tests.
Committer checklist
Has a corresponding JIRA in PR title & commit
Commit message is descriptive of the change
CI is green
Necessary doc changes done or have another open PR
For large changes, please consider breaking it into sub-tasks under an umbrella JIRA.