Enable BigQuery CDC configuration for Python BigQuery sink #32529
base: master
Conversation
fixes #32527
I have a question @ahmedabu98: how can I make sure the xlang tests for BQ have run? This branch's test output looks pretty similar to the runs on master, but in both cases it seems that the BQ StorageWrite transform is not being registered. Am I looking at the right place, or are the xlang ITs run in some other task?
The test you are referencing generates wrappers for external transforms (unrelated here, since this wrapper is handwritten). The workflows testing GCP xlang IOs are beam_PostCommit_Python_Xlang_Gcp_Dataflow and beam_PostCommit_Python_Xlang_Gcp_Direct. These are post-commits, so to trigger them you just need to modify their respective target files and push to this branch:
…plemented table constraint support on the bigquery fake dataset services
Left some more comments
"In the case of using CDC writes and setting CREATE_IF_NEEDED mode for the tables" | ||
+ " a primary key is required.") | ||
@Nullable | ||
public abstract List<String> getCdcWritesPrimaryKey(); |
nit:
"In the case of using CDC writes and setting CREATE_IF_NEEDED mode for the tables" | |
+ " a primary key is required.") | |
@Nullable | |
public abstract List<String> getCdcWritesPrimaryKey(); | |
"If CREATE_IF_NEEDED disposition is set, BigQuery table(s) will be created with this primary key. " | |
+ "Required when CDC writes are enabled with CREATE_IF_NEEDED.") | |
@Nullable | |
public abstract List<String> getPrimaryKey(); |
if (!Strings.isNullOrEmpty(configuration.getCreateDisposition())) {
  checkArgument(
      BigQueryStorageWriteApiSchemaTransformConfiguration.CREATE_DISPOSITIONS
              .get(configuration.getCreateDisposition().toUpperCase())
              .equals(CreateDisposition.CREATE_IF_NEEDED)
          && !Optional.ofNullable(configuration.getCdcWritesPrimaryKey())
              .orElse(ImmutableList.of())
              .isEmpty(),
      "When using CDC writes into BigQuery, alongside with CREATE_IF_NEEDED mode,"
          + " a primary key should be provided.");
}
if (configuration.getTable().equals(DYNAMIC_DESTINATIONS)) {
  checkArgument(
      schema.getFieldNames().contains("destination"),
      "When writing to dynamic destinations, we expect Row Schema with a "
          + "\"destination\" string field.");
}
Let's lean on the existing checks -- I don't think we need to create new ones. The first check is already covered in BigQueryIO, and the second check is covered above in this file
write =
    write
        .to(dynamicDestination)
        .to(new RowDynamicDestinations(schema.getField("record").getType().getRowSchema()))
Can we just instantiate new RowDynamicDestinations(<row schema>, <primary key>) here, and avoid instantiating it again in validateAndIncludeCDCInformation?
} else if (Optional.ofNullable(configuration.getUseCdcWrites()).orElse(false)) {
  write = validateAndIncludeCDCInformation(write, schema);
Can we make this a separate if block, outside of this if/else chain? We should be able to apply this method to both the dynamic destination and single table cases. The only factor should be whether or not useCdcWrites is true.
RowDynamicDestinations destinations =
    new RowDynamicDestinations(schema.getField("record").getType().getRowSchema())
        .withPrimaryKey(configuration.getCdcWritesPrimaryKey());
if (!configuration.getTable().equals(DYNAMIC_DESTINATIONS)) {
  destinations = destinations.withFixedDestination(configuration.getTable());
}
can we remove this duplicated code? (see previous comment)
BigQueryStorageWriteApiSchemaTransformConfiguration.builder()
    .setTable(dynamic)
    .setUseCdcWritesWithPrimaryKey(primaryKeyColumns)
    .setUseCdcWrites(true)
    .setCdcWritesPrimaryKey(primaryKeyColumns)
    .build();
I wonder why this is possible without explicitly setting "at-least-once" mode
use_cdc_writes: Configure the usage of CDC writes on BigQuery.
  The argument can be used by passing True and the Beam Rows will be
  sent as they are to the BigQuery sink which expects a 'record'
  and 'cdc_info' properties.
  Used for STORAGE_WRITE_API, working on 'at least once' mode.
"If True, your input elements are expected to have a 'record' field representing the record to write, and a 'cdc_info: {mutation_type: , change_sequence_number: }' field representing the mutation information."
cdc_writes_primary_key: When using CDC write on BigQuery and
  CREATE_IF_NEEDED mode for the underlying tables a list of column names
  is required to be configured as the primary key. Used for
  STORAGE_WRITE_API.
Current:
  STORAGE_WRITE_API.
Suggested:
  STORAGE_WRITE_API and at_least_once mode.
use_cdc_writes=lambda row: beam.Row(
    mutation_type="UPSERT",
    change_sequence_number="AAA/" + str(row.value)),
Note that when the user is writing Python dicts, they should be able to supply a function that works on those dicts (i.e. they shouldn't have to know what a Beam Row is). Most Python users are not aware that Beam Rows and the Java IO are being used under the hood.
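For example, a dict-based pipeline could pass something along these lines (purely illustrative; the callable option is the part under discussion and was later dropped from this PR):

# Illustrative sketch: a mutation-info function that works on plain dicts,
# mirroring the beam.Row lambda above but without exposing Beam Rows.
def cdc_info_from_dict(element: dict) -> dict:
    return {
        "mutation_type": "UPSERT",
        "change_sequence_number": "AAA/" + str(element["value"]),
    }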
use_cdc_writes = False
# if CDC functionality is configured we need to check if a callable has
# been passed to extract MutationInfo from the rows to be written
if callable(self._use_cdc_writes):
I'm realizing that this callable CDC writes option is more complex than I thought. For it to be complete and not confusing for users, we would have to provide this logic for both Python dict inputs and Beam Row inputs.
I suggest we keep the CDC option as a boolean and make this improvement in a future PR.
Let me try to have it implemented for both in this change; if you still see it as incomplete, I will remove it after the next review.
Removed the Callable argument for now; will work on it in a separate PR.
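With the callable dropped, usage would be roughly the boolean form below (a sketch; use_cdc_writes and cdc_writes_primary_key follow this PR's docstrings, while the table spec, schema handling, and the use_at_least_once switch are assumptions about the surrounding WriteToBigQuery setup):

import apache_beam as beam

with beam.Pipeline() as p:
    _ = (
        p
        | beam.Create([
            beam.Row(
                record=beam.Row(id=1, name="product-a"),
                cdc_info=beam.Row(
                    mutation_type="UPSERT",
                    change_sequence_number="AAA/1")),
        ])
        | beam.io.WriteToBigQuery(
            table="project:dataset.table",  # placeholder table spec
            method=beam.io.WriteToBigQuery.Method.STORAGE_WRITE_API,
            use_at_least_once=True,  # assumed switch for at-least-once mode
            use_cdc_writes=True,
            cdc_writes_primary_key=["id"]))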
… callable on a future PR
Run Java_GCP_IO_Direct PreCommit
Enables the configuration of CDC writes into a BigQuery table by setting the primary key columns to be used in the row mutations. This change adds the configuration of CDC usage for the BigQueryStorageWriteApiSchemaTransformProvider.
By setting configuration.setUseCDCWritesWithPrimaryKey(List.of("col1", "col2")) on the provider's config, the transform creation will create a BigQuery.Write<Row> transform that configures the right row mutation information by checking for a Row schema with 'record' and 'cdc_info' fields (sketched below).
The implementation also enables setting a dynamic destination alongside CDC usage by adding a 'destination' field to that Row schema.
Note: combining dynamic destinations with CDC is only supported when all the destinations share the same primary key columns.
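For reference, a rough sketch of the dynamic-destination Row shape described above (illustrative only; 'record' and 'cdc_info' are the CDC fields discussed in the review, and 'destination' is the extra string field selecting the table):

import apache_beam as beam

# Sketch: a CDC row aimed at a dynamic destination. All destinations must
# share the same primary key columns for this combination to be supported.
dynamic_destination_row = beam.Row(
    destination="project:dataset.table_a",  # placeholder table spec
    record=beam.Row(id=1, name="product-a"),
    cdc_info=beam.Row(
        mutation_type="UPSERT",
        change_sequence_number="AAA/1"))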