
Clearing record grouper on exceptions to avoid duplicates #227

Merged
merged 1 commit into main from anatolii/duplicate-records-fix on Nov 28, 2022

Conversation

@AnatolyPopov AnatolyPopov commented Nov 28, 2022

When an exception is thrown during flush (e.g., a network error), Kafka Connect rewinds the offsets to the last committed ones and tries to commit the current offsets again. This causes duplicates in the connector, since the redelivered records are added on top of the records still cached in the record grouper. Clearing the record grouper on exception solves the issue.
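To illustrate the failure mode and the fix, here is a minimal sketch written against the Kafka Connect `SinkTask` API. The Map-based grouper, `fileNameFor`, and `upload` are hypothetical stand-ins for this connector's actual record grouper and storage writer; only the clear-on-flush-failure pattern reflects the change described above.

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.connect.sink.SinkRecord;
import org.apache.kafka.connect.sink.SinkTask;

public class ExampleSinkTask extends SinkTask {

    // Hypothetical in-memory grouper: buffers records keyed by output file name.
    private final Map<String, List<SinkRecord>> recordGrouper = new HashMap<>();

    @Override
    public void put(final Collection<SinkRecord> records) {
        // Records accumulate in the grouper between flushes. If a flush fails
        // and Kafka Connect rewinds and redelivers these records, adding them
        // on top of the already-buffered copies is what produces duplicates.
        for (final SinkRecord record : records) {
            recordGrouper
                .computeIfAbsent(fileNameFor(record), ignored -> new ArrayList<>())
                .add(record);
        }
    }

    @Override
    public void flush(final Map<TopicPartition, OffsetAndMetadata> currentOffsets) {
        try {
            for (final Map.Entry<String, List<SinkRecord>> group : recordGrouper.entrySet()) {
                upload(group.getKey(), group.getValue()); // may throw, e.g. on a network error
            }
        } finally {
            // The fix in this PR, modeled here with a finally block: clear the
            // buffered records whether the flush succeeded or failed. On failure,
            // Kafka Connect redelivers everything via put(), so the grouper must
            // start empty to avoid writing the same records twice.
            recordGrouper.clear();
        }
    }

    private String fileNameFor(final SinkRecord record) {
        // Illustrative grouping key: one output file per topic-partition.
        return record.topic() + "-" + record.kafkaPartition();
    }

    private void upload(final String fileName, final List<SinkRecord> records) {
        // Placeholder for the actual write to object storage.
        throw new UnsupportedOperationException("not implemented in this sketch");
    }

    @Override
    public String version() {
        return "sketch";
    }

    @Override
    public void start(final Map<String, String> props) {
    }

    @Override
    public void stop() {
    }
}
```

In this simplified model the `finally` block clears the grouper on both success and failure; whether the real connector needs to clear only on exception (as the PR title suggests) depends on how it tracks records that were already written.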

@AnatolyPopov AnatolyPopov requested review from a team as code owners November 28, 2022 10:31
@ivanyu ivanyu merged commit 6291fcb into main Nov 28, 2022
@ivanyu ivanyu deleted the anatolii/duplicate-records-fix branch November 28, 2022 11:30