Snowflake 2.0.0 Upgrade #6
Closed
…/streamkap-com/snowflake-kafka-connector into streaming-no-auto-schematization
The semantic Time data type should map to Snowflake's equivalent type, not Timestamp. The semantic ZonedTime data type has no Snowflake equivalent, so the next closest type is used.
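For illustration, here is a minimal sketch of the mapping described above, assuming Debezium-style semantic type names; the class and method names are hypothetical, and since the commit doesn't name the "next closest type" for ZonedTime, the choice below is an assumption:

```java
import org.apache.kafka.connect.data.Schema;

/**
 * Minimal sketch of the temporal mapping described above -- not the
 * connector's actual code. Names are hypothetical; the logical type
 * names follow Debezium's semantic temporal types.
 */
public class TemporalTypeMapping {

    static String snowflakeTypeFor(Schema schema) {
        String logicalName = schema.name();
        if (logicalName != null) {
            switch (logicalName) {
                case "io.debezium.time.Time":      // semantic Time maps to
                case "io.debezium.time.MicroTime": // Snowflake TIME, not TIMESTAMP
                    return "TIME";
                case "io.debezium.time.ZonedTime":
                    // No exact Snowflake equivalent; the commit doesn't say
                    // which type was chosen, so this pick is illustrative only.
                    return "TIMESTAMP_TZ";
                default:
                    break;
            }
        }
        // Everything else falls back to the plain schema type.
        switch (schema.type()) {
            case INT32:
            case INT64:
                return "NUMBER";
            case STRING:
                return "VARCHAR";
            default:
                return "VARIANT";
        }
    }
}
```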
This reverts commit 8af782d.
…emporal-types-issue STR-783 Snowflake 1.9.2 Upgrade & STR-877 Temporal type mapping fix
… be more consistent (snowflakedb#639)
…lakedb#656) Co-authored-by: Enzo Cappa <[email protected]>
There was a miscommunication in this PR comment about which test needed to be removed. Adding the e2e test back.
…chematization is enabled (snowflakedb#658)

It looks like we have a gap in KC that may skip ingesting some offsets. Consider the case where two topics with different schemas ingest into the same table. Internally, KC creates two channels (channel A and channel B) with offset_token=NULL, and both channels start buffering data and flushing files. If channel A fails to commit its first batch because the file schema doesn't match the latest table schema (due to schema evolution), channel A is invalidated and reopened, but the consumer offset is not reset because channel A's offset_token is still NULL. When the offset_token is NULL we rely on Kafka to send us the correct data, so Kafka keeps sending the next batch, we accept it, and channel A's first batch is skipped forever.

We need to rethink what to do when a channel's offset_token is NULL; we can't rely purely on Kafka to resend the correct offset. The fix is to manage the Kafka consumer offset in the connector as well, and use that to reset Kafka when a channel's offset token is NULL instead of relying on Kafka's position.
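To make the fix concrete, here is a minimal sketch of the idea, not the actual patch; `TrackedOffsetManager` and its methods are hypothetical names. The connector records the offsets it has consumed and committed itself, and uses Kafka Connect's `SinkTaskContext.offset()` (a real API that requests a consumer seek before the next poll) to rewind when a reopened channel's offset token is NULL:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.connect.sink.SinkRecord;
import org.apache.kafka.connect.sink.SinkTaskContext;

/**
 * Hypothetical sketch of the fix described above: track offsets in the
 * connector so a reopened channel with a NULL offset token can rewind
 * Kafka explicitly instead of trusting Kafka's current position.
 */
public class TrackedOffsetManager {
    private final SinkTaskContext context;
    // First offset this task consumed per partition (nothing earlier is owed).
    private final Map<TopicPartition, Long> firstConsumed = new ConcurrentHashMap<>();
    // Last offset confirmed durable in Snowflake (i.e., the offset token).
    private final Map<TopicPartition, Long> lastCommitted = new ConcurrentHashMap<>();

    public TrackedOffsetManager(SinkTaskContext context) {
        this.context = context;
    }

    /** Call for every record seen in put(). */
    public void onRecordConsumed(SinkRecord record) {
        TopicPartition tp = new TopicPartition(record.topic(), record.kafkaPartition());
        firstConsumed.putIfAbsent(tp, record.kafkaOffset());
    }

    /** Call when Snowflake acknowledges a batch (offset token advances). */
    public void onBatchCommitted(TopicPartition tp, long offset) {
        lastCommitted.put(tp, offset);
    }

    /**
     * Call after reopening an invalidated channel. A NULL offset token means
     * nothing was persisted, so rewind to just after the last committed
     * offset or, failing that, to the first offset we consumed, forcing the
     * failed batch to be redelivered instead of silently skipped.
     */
    public void onChannelReopened(TopicPartition tp, String offsetToken) {
        if (offsetToken != null) {
            return; // channel knows its position; the normal reset path applies
        }
        Long committed = lastCommitted.get(tp);
        Long restartFrom;
        if (committed != null) {
            restartFrom = committed + 1;
        } else {
            restartFrom = firstConsumed.get(tp);
        }
        if (restartFrom != null) {
            context.offset(tp, restartFrom); // framework seeks before next poll
        }
        // With no tracked offsets at all, Kafka's committed offset remains
        // the only fallback (the pre-fix behavior).
    }
}
```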
- No conflicts
- Added toolchains to help with building the Connector using specific JDK versions (see the sketch after this list)
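As a sketch of the toolchains setup, assuming Maven's standard toolchains mechanism (the JDK version and path below are illustrative, not taken from this PR), an entry like this in `~/.m2/toolchains.xml` lets the build select a specific JDK:

```xml
<!-- ~/.m2/toolchains.xml; version and path are illustrative -->
<toolchains>
  <toolchain>
    <type>jdk</type>
    <provides>
      <version>11</version>
    </provides>
    <configuration>
      <jdkHome>/path/to/jdk-11</jdkHome>
    </configuration>
  </toolchain>
</toolchains>
```

With this in place, the maven-toolchains-plugin pins the build to the declared JDK rather than whatever `java` happens to be on the PATH.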
Skipped the 1.9.4 upgrade because 2.0.0 reverts a 1.9.4 fix affecting channel opening/closing, an area where we've seen warnings and exceptions in earlier versions.