Update module github.com/twmb/franz-go to v1.18.0 #32
This PR contains the following updates: github.com/twmb/franz-go v1.16.1 -> v1.18.0
Release Notes
twmb/franz-go (github.com/twmb/franz-go)
v1.18.0
Compare Source
===
This release adds support for Kafka 3.8, adds a few community-requested APIs,
makes some internal improvements, and fixes two bugs. One of the bug fixes is
for a deadlock; it is recommended to bump to this release to ensure you do not
run into the deadlock. The features in this release are relatively small.

This adds protocol support for KIP-890 and KIP-994, and adds further protocol
support for KIP-848. If you are using transactions, you may see a new
kerr.TransactionAbortable error, which signals that your ongoing transaction
should be aborted and will not be successful if you try to commit it.

Lastly, there have been a few improvements to pkg/sr that are not mentioned in
these changelog notes.
Bug fixes
If you canceled the context used while producing while your client was
at the maximum buffered records or bytes, it was possible to experience
deadlocks. This has been fixed. See #832 for more details.
Previously, if using GetConsumeTopics while regex consuming, the function
would return all topics ever discovered. It now returns only the topics that
are being consumed.
Improvements

The client now ignores OutOfOrderSequenceNumber errors where possible. If a
producer produces very infrequently, it is possible the broker forgets the
producer by the next time the producer produces. In this case, the producer
receives an OutOfOrderSequenceNumber error. The client now internally resets
properly so that you do not see the error.
Features

AllowRebalance and CloseAllowingRebalance have been added to
GroupTransactSession.
The FetchTopic type now includes the topic's TopicID.
The ErrGroupSession internal error field is now public, allowing you to test
how you handle the internal error.
You may now see a kerr.TransactionAbortable error from many functions while
using transactions.

Relevant commits

0fd1959d kgo: support Kafka 3.8's kip-890 modifications
68163c55 bugfix kgo: do not add all topics to internal tps map when regex consuming
3548d1f7 improvement kgo: ignore OOOSN where possible
6a759401 bugfix kgo: fix potential deadlock when reaching max buffered (records|bytes)
4bfb0c68 feature kgo: add TopicID to the FetchTopic type
06a9c47d feature kgo: export the wrapped error from ErrGroupSession
4affe8ef feature kgo: add AllowRebalance and CloseAllowingRebalance to GroupTransactSession

v1.17.1
Compare Source
===
This patch release fixes four bugs (two are fixed in one commit), contains two
internal improvements, and adds two other minor changes.
Bug fixes
If you were using the MaxBufferedBytes option and ever hit the max, odds are
likely that you would experience a deadlock eventually. That has been fixed.

If you ever produced a record with no topic field and without using
DefaultProduceTopic, or if you produced a transactional record while not in a
transaction, AND if the client was at the maximum buffered records, odds are
you would eventually deadlock. This has been fixed.

It was previously not possible to set lz4 compression levels. This has been
fixed.

There was a data race on a boolean field if a produce request was being
written at the same time a metadata update happened, and if the metadata
update had an error on the topic or partition that was actively being written.
Note that the race was unlikely and, if you experienced it, you would have
noticed an OutOfOrderSequenceNumber error. See this comment for more details.
Improvements
Canceling the context you pass to Produce now propagates in two more areas:
the initial InitProducerID request that occurs the first time you produce,
and if the client is internally backing off due to a produce request failure.
Note that there is no guarantee on which context is used for cancelation if
you produce many records, and the client does not allow canceling if it is
currently unsafe to do so. However, this does mean that if your cluster is
somewhat down such that InitProducerID is failing on your new client, you can
now actually cause the Produce to quit. See this comment for what it means
for a record to be "safe" to fail.
The client now ignores aborted records while consuming only if you have
configured FetchIsolationLevel(ReadCommitted()). Previously, the client relied
entirely on the FetchResponse AbortedTransactions field, but it is possible
that brokers could send aborted transactions even when not using read
committed. Specifically, this was a behavior difference in Redpanda, and the
KIP that introduced transactions and all relevant documents do not mention
what the broker behavior actually should be here. Redpanda itself was also
changed to not send aborted transactions when using read committed, but
franz-go has been improved as well.

Decompression now better reuses buffers under the hood, reducing allocations.

Brokers that return preferred replicas to fetch from now cause an info-level
log in the client.
Relevant commits

305d8dc kgo: allow record ctx cancelation to propagate a bit more
24fbb0f bugfix kgo: fix deadlock in Produce when using MaxBufferedBytes
1827add bugfix kgo sink: fix read/write race for recBatch.canFailFromLoadErrs
d7ea2c3 bugfix fix setting lz4 compression levels (thanks @asg0451!)
5809dec optimise: use byteBuffer pool in decompression (thanks @kalbhor!)
cda897d kgo: add log for preferred replicas
e62b402 improvement kgo sink: do not back off on certain edge case
9e32bf9 kgo: ignore aborted txns if using READ_UNCOMMITTED
v1.17.0
Compare Source
===
This long-coming release, four months after v1.16.0, adds support for Kafka 3.7
and adds a few community added or requested APIs. There will be a kadm release
shortly following this one, and maybe a plugin release.
This adds full support for KIP-951, as well as protocol support for
KIP-919 (which has no client facing features) and KIP-848
(protocol only, not the feature!). KIP-951 should make the client faster at
handling when the broker moves partition leadership to a different broker.
There are two fairly minor bug fixes in the kgo package in this release, both
described below. There is also one bug fix in the independent (and currently
untagged) pkg/sr module. Because pkg/sr is untagged, the bug fix was released
a long time ago, but the relevant commit is still mentioned below.
Bug fixes
Previously, upgrading a consumer group from non-cooperative to cooperative
while the group was running did not work. This is now fixed (by @hamdanjaveed, thank you!).
Previously, if a cooperative consumer group member rebalanced while fetching
offsets for partitions, if those partitions were not lost in the rebalance,
the member would call OnPartitionsAssigned with those partitions again.
Now, those partitions are passed to OnPartitionsAssigned only once (the first time).
Improvements
The client will now stop lingering if you hit max buffered records or bytes.
Previously, if your linger was long enough, you could stall once you hit
either of the Max options; that is no longer the case.
If you are issuing admin APIs on the same client you are using for consuming
or producing, you may see fewer metadata requests being issued.
There are a few other even more minor improvements in the commit list if you
wish to go spelunking :).
Features
The Offset type has a new method AtCommitted(), which causes the consumer to
not fetch any partitions that do not have a previous commit. This mirrors
Kafka's auto.offset.reset=none option.

KIP-951, linked above and in the commit linked below, improves latency around
partition leader transfers on brokers.

Client.GetConsumeTopics allows you to query what topics the client is
currently consuming. This may be useful if you are consuming via regex.

Client.MarkCommitOffsets allows you to mark offsets to be committed in bulk,
mirroring the non-mark API CommitOffsets.
.Relevant commits
franz-go
a7caf20
feature kgo.Offset: add AtCommitted()55dc7a0
bugfix kgo: re-add fetch-canceled partitions AFTER the user callbackdb24bbf
improvement kgo: avoid / wakeup lingering if we hit max bytes or max records993544c
improvement kgo: Optimistically cache mapped metadata when cluster metadata is periodically refreshed (thanks @pracucci!)1ed02eb
feature kgo: add support for KIP-9512fbbda5
bugfix fix: clear lastAssigned when revoking eager consumerd9c1a41
pkg/kerr: add new errors54d3032
pkg/kversion: add 3.7892db71
pkg/sr bugfix sr SubjectVersions calls pathSubjectVersioned26ed0
feature kgo: adds Client.GetConsumeTopics (thanks @UnaffiliatedCode!)929d564
feature kgo: adds Client.MarkCommitOffsets (thanks @sudo-sturbia!)kfake
kfake as well has a few improvements worth calling out:
18e2cc3
kfake: support committing to non-existing groupsb05c3b9
kfake: support KIP-951, fix OffsetForLeaderEpoch5d8aa1c
kfake: fix handling ListOffsets with requested timestampConfiguration
📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
🔕 Ignore: Close this PR and you won't be reminded about this update again.
This PR was generated by Mend Renovate. View the repository job log.