
Bump pulsar version to 3.2.0-SNAPSHOT #5902

Closed
wants to merge 60 commits

Conversation

streamnativebot

This is a PR created by snbot to trigger the check suite in each repository.

RobertIndie and others added 30 commits August 30, 2023 00:23
…che#21070)

### Motivation

Currently, when the producer resends a chunked message like this:
- M1: UUID: 0, ChunkID: 0
- M2: UUID: 0, ChunkID: 0 // Resend the first chunk
- M3: UUID: 0, ChunkID: 1

When the consumer receives M2, it finds that it is already tracking the chunked message with UUID 0 and then discards both M1 and M2. This makes it impossible to consume the whole chunked message even though it is already persisted in the Pulsar topic.

Here is the code logic:
https://github.com/apache/pulsar/blob/44a055b8a55078bcf93f4904991598541aa6c1ee/pulsar-client/src/main/java/org/apache/pulsar/client/impl/ConsumerImpl.java#L1436-L1482

The bug can be easily reproduced using the test case `testResendChunkMessages` introduced by this PR.


### Modifications

- When receiving a duplicated first chunk of a chunked message, the consumer discards the current chunked message context and creates a new context to track the following chunks. For the case described in Motivation, M1 is released and the consumer assembles M2 and M3 into the chunked message.
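A minimal sketch of that behavior, using illustrative names rather than the actual `ConsumerImpl` internals: a duplicated first chunk resets the reassembly context instead of discarding both chunks.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch only; not the actual ConsumerImpl logic.
class ChunkedMessageTracker {
    // One in-flight reassembly buffer per chunked-message UUID.
    private final Map<String, StringBuilder> contexts = new HashMap<>();

    /** Returns the full payload once the last chunk arrives, otherwise null. */
    String onChunk(String uuid, int chunkId, int totalChunks, String payload) {
        if (chunkId == 0 && contexts.containsKey(uuid)) {
            // Duplicated first chunk (M2): drop the stale context and start over,
            // instead of discarding both the old and the new chunk.
            contexts.remove(uuid);
        }
        StringBuilder ctx = contexts.computeIfAbsent(uuid, k -> new StringBuilder());
        ctx.append(payload);
        if (chunkId == totalChunks - 1) {
            contexts.remove(uuid);
            return ctx.toString(); // whole chunked message reassembled (M2 + M3)
        }
        return null;
    }
}
```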
)

Motivation: When deleting a namespace, we should delete the znode under the path `/loadbalance/bundle-data` in the `local metadata store` instead of the `global metadata store`.

Modifications: Delete the bundle-data znode in the local metadata store.
…ache#20948)

## Motivation
Make chunked messages work properly when deduplication is enabled.
## Modification
### Only check and store the sequence ID of the last chunk in a chunk message.
 For example:
 ```markdown
     Chunk-1 sequence ID: 0, chunk ID: 0, total chunk: 2
     Chunk-2 sequence ID: 0, chunk ID: 1
     Chunk-3 sequence ID: 1, chunk ID: 0, total chunk: 3
     Chunk-4 sequence ID: 1, chunk ID: 1
     Chunk-5 sequence ID: 1, chunk ID: 1
     Chunk-6 sequence ID: 1, chunk ID: 2
```   
Only check and store the sequence ID of Chunk-2 and Chunk-6.
**Add a property to the publishContext to determine whether this chunk is the last chunk when it is completely persisted.**
```java
publishContext.setProperty(IS_LAST_CHUNK, Boolean.FALSE);
```
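
A hypothetical sketch of the broker-side dedup rule this enables (names such as `isLastChunk` and `highestSequencePersisted` are illustrative, not actual broker code): only the last chunk checks and advances the per-producer sequence cursor.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of the dedup rule for chunked messages; not actual broker code.
class ChunkAwareDeduplicator {
    // Highest sequence ID persisted per producer, advanced only by the last chunk.
    private final Map<String, Long> highestSequencePersisted = new HashMap<>();

    boolean isDuplicate(String producerName, long sequenceId, boolean isLastChunk) {
        if (!isLastChunk) {
            // Intermediate chunks are neither checked nor recorded; duplicated
            // intermediate chunks are filtered on the consumer side instead.
            return false;
        }
        long highest = highestSequencePersisted.getOrDefault(producerName, -1L);
        if (sequenceId <= highest) {
            return true; // the whole chunked message was already persisted
        }
        highestSequencePersisted.put(producerName, sequenceId); // e.g. Chunk-2, Chunk-6
        return false;
    }
}
```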
### Filter and ack duplicated chunks in a chunk message instead of discarding the chunked message context.
 For example:
 ```markdown
     Chunk-1 sequence ID: 0, chunk ID: 0, msgID: 1:1
     Chunk-2 sequence ID: 0, chunk ID: 1, msgID: 1:2
     Chunk-3 sequence ID: 0, chunk ID: 2, msgID: 1:3
     Chunk-4 sequence ID: 0, chunk ID: 1, msgID: 1:4
     Chunk-5 sequence ID: 0, chunk ID: 2, msgID: 1:5
     Chunk-6 sequence ID: 0, chunk ID: 3, msgID: 1:6
```   
We should filter and ack chunk-4 and chunk-5.
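A minimal illustrative sketch of the consumer-side filtering (not the actual client code): an unexpected or repeated chunk ID is acked and skipped rather than tearing down the whole context.

```java
// Illustrative sketch only: ack and skip a duplicated chunk instead of
// discarding the whole chunked message context.
class ChunkDuplicateFilter {
    private int expectedChunkId = 0;

    /** Returns true if the chunk should be appended, false if it is a duplicate. */
    boolean accept(int chunkId, Runnable ackDuplicate) {
        if (chunkId != expectedChunkId) {
            ackDuplicate.run(); // e.g. Chunk-4 and Chunk-5 in the example above
            return false;
        }
        expectedChunkId++;
        return true;
    }
}
```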
Signed-off-by: tison <[email protected]>
Co-authored-by: Alexander Preuß <[email protected]>
Co-authored-by: tison <[email protected]>
… was failed (apache#20935)

The process of persisting the mark-deleted position is like this:
- persist to BK
- If failed to persist to BK, try to persist to ZK

But in the current implementation, if creating the cursor ledger fails, Pulsar will not try to persist to ZK. As a result, when cursor-ledger creation fails, a lot of ack records cannot be persisted, and we will see a lot of repeated consumption after BK recovers.

Modifications: Try to persist the mark-deleted position to ZK if creating the cursor ledger fails.
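
A minimal sketch of the intended fallback, assuming hypothetical `persistToBookKeeper`/`persistToMetadataStore` helpers rather than the real managed cursor code:

```java
import java.util.concurrent.CompletableFuture;

// Illustrative sketch of the intended fallback; helper names are hypothetical,
// not the real ManagedCursor implementation.
class MarkDeletePersister {

    CompletableFuture<Void> persistMarkDeleted(long ledgerId, long entryId) {
        return persistToBookKeeper(ledgerId, entryId)
                .exceptionallyCompose(bkError ->
                        // Fall back to the metadata store (ZK) even when the
                        // failure is "cursor ledger could not be created".
                        persistToMetadataStore(ledgerId, entryId));
    }

    private CompletableFuture<Void> persistToBookKeeper(long ledgerId, long entryId) {
        return CompletableFuture.failedFuture(new RuntimeException("cursor ledger creation failed"));
    }

    private CompletableFuture<Void> persistToMetadataStore(long ledgerId, long entryId) {
        return CompletableFuture.completedFuture(null);
    }
}
```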
…and schema. (apache#21093)

Fixes apache#21075 

### Motivation

When the topic is loaded, deleting it will also delete the topic-level policies if they are enabled. But if the topic is not loaded, it is deleted directly through the managed ledger factory, which leaves the topic policies behind. When the topic is created the next time, it will use the old topic policies.

### Modifications

When deleting the topic, delete the schema and topic policies even if the topic is not loaded.
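
A hypothetical sketch of the intended deletion order (method names are placeholders, not the actual broker API): schema and topic-level policies are removed in both the loaded and not-loaded branches.

```java
import java.util.concurrent.CompletableFuture;

// Hypothetical sketch of the intended delete flow; method names are placeholders.
class TopicDeleter {

    CompletableFuture<Void> deleteTopic(String topic, boolean topicLoaded) {
        // Delete the schema and topic-level policies in both branches, so a later
        // re-creation of the topic does not pick up stale policies.
        return deleteSchema(topic)
                .thenCompose(ignore -> deleteTopicPolicies(topic))
                .thenCompose(ignore ->
                        topicLoaded ? deleteLoadedTopic(topic) : deleteManagedLedger(topic));
    }

    private CompletableFuture<Void> deleteSchema(String topic) {
        return CompletableFuture.completedFuture(null);
    }

    private CompletableFuture<Void> deleteTopicPolicies(String topic) {
        return CompletableFuture.completedFuture(null);
    }

    private CompletableFuture<Void> deleteLoadedTopic(String topic) {
        return CompletableFuture.completedFuture(null);
    }

    private CompletableFuture<Void> deleteManagedLedger(String topic) {
        return CompletableFuture.completedFuture(null);
    }
}
```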
## Motivation
Handle the ack hole case:
For example:
```markdown
    Chunk-1 sequence ID: 0, chunk ID: 0, msgID: 1:1
    Chunk-2 sequence ID: 0, chunk ID: 1, msgID: 1:2
    Chunk-3 sequence ID: 0, chunk ID: 0, msgID: 1:3
    Chunk-4 sequence ID: 0, chunk ID: 1, msgID: 1:4
    Chunk-5 sequence ID: 0, chunk ID: 2, msgID: 1:5
```
The consumer acks a chunked message via a ChunkMessageIdImpl that consists of all the chunks of that chunked message (Chunk-3, Chunk-4, Chunk-5). Chunk-1 and Chunk-2 are not included in the ChunkMessageIdImpl, so we should handle them here.
## Modification
Ack chunk-1 and chunk-2.
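
A hypothetical sketch of the fix, with illustrative names rather than the real client types: the consumer remembers every chunk message ID it received for the chunked message and, when the assembled message is acked, also acks the IDs not covered by the ChunkMessageIdImpl.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;
import java.util.function.Consumer;

// Illustrative sketch only; the real client works with MessageId objects
// inside ChunkMessageIdImpl rather than strings.
class ChunkAckHoleFixer {
    // Every chunk message ID received for the current chunked message (1:1 .. 1:5).
    private final List<String> receivedChunkIds = new ArrayList<>();

    void onChunkReceived(String messageId) {
        receivedChunkIds.add(messageId);
    }

    /** On ack, also ack chunks that are not part of the assembled ChunkMessageId. */
    void onMessageAcked(Set<String> idsInChunkMessageId, Consumer<String> ack) {
        for (String id : receivedChunkIds) {
            if (!idsInChunkMessageId.contains(id)) {
                ack.accept(id); // e.g. 1:1 and 1:2 in the example above
            }
        }
        receivedChunkIds.clear();
    }
}
```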
### Modifications
When we upgraded Pulsar from 2.9.2 to 2.10.3, the isolated group feature stopped working.
We eventually found the problem: in IsolatedBookieEnsemblePlacementPolicy, when getting the bookie rack from the metadata store cache, it uses future.isDone() to avoid a sync operation and returns empty blacklists if the future is incomplete.
The cache may expire due to the Caffeine cache `getExpireAfterWriteMillis` config; if the cache expires, the future may be incomplete (apache#21095 will correct that behavior).

In 2.9.2, the data was fetched from the metadata store synchronously, and we should keep that behavior.
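
A minimal sketch of the intended change, with hypothetical names and timeout (the real policy reads the rack configuration through the metadata store cache): block on the cached future with a bound instead of checking `isDone()`.

```java
import java.util.Collections;
import java.util.Set;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

// Illustrative sketch; the real policy reads the bookie rack configuration through
// the metadata store cache. The names and the 30s bound are assumptions.
class RackBlacklistResolver {

    Set<String> resolveBlacklist(CompletableFuture<Set<String>> rackInfoFuture) {
        try {
            // Block (with a bound) instead of checking isDone(), so an expired
            // cache entry does not silently produce an empty blacklist.
            return rackInfoFuture.get(30, TimeUnit.SECONDS);
        } catch (Exception e) {
            return Collections.emptySet();
        }
    }
}
```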
Technoboy- and others added 27 commits September 6, 2023 22:06
…n per broker (apache#21144)

Motivation: Pulsar has two mechanisms to guarantee that even if a producer connects to the broker multiple times, the result is still correct.

- Within a single connection, the second registration waits for the first one to complete.
- On a topic, the second registration will override the previous one.

However, if a producer can use different connections to connect to the broker, these two mechanisms will not work.

When the `PulsarClient` config `connectionsPerBroker` is larger than `1`, a producer could use more than one connection, leading to the issue above. You can reproduce this issue with the test `testSelectConnectionForSameProducer`.

Modifications: Make the same producer/consumer always use the same connection.
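
A minimal sketch of the idea, with illustrative types (the real client keys and caches connections differently): derive the connection index from a stable per-producer key instead of rotating across `connectionsPerBroker`.

```java
import java.util.List;

// Illustrative sketch; the real client keys and caches connections differently.
class ConnectionSelector {
    private final List<String> connectionsToBroker; // size == connectionsPerBroker

    ConnectionSelector(List<String> connectionsToBroker) {
        this.connectionsToBroker = connectionsToBroker;
    }

    /** Always map the same producer/consumer id onto the same connection. */
    String selectFor(long producerId) {
        int index = (int) Math.floorMod(producerId, (long) connectionsToBroker.size());
        return connectionsToBroker.get(index);
    }
}
```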
…21051)

pip: apache#21052
### Motivation

Introduce the `getLastMessageIds` API to Reader.

### Modifications

Implement the `getLastMessageIds` API for Reader.
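
A hedged usage example, assuming the Reader API mirrors the existing Consumer `getLastMessageIds` and returns a `List<TopicMessageId>`; the service URL and topic are placeholders.

```java
import java.util.List;
import org.apache.pulsar.client.api.MessageId;
import org.apache.pulsar.client.api.PulsarClient;
import org.apache.pulsar.client.api.Reader;
import org.apache.pulsar.client.api.TopicMessageId;

public class ReaderLastMessageIdsExample {
    public static void main(String[] args) throws Exception {
        // Placeholder service URL and topic.
        try (PulsarClient client = PulsarClient.builder()
                     .serviceUrl("pulsar://localhost:6650")
                     .build();
             Reader<byte[]> reader = client.newReader()
                     .topic("persistent://public/default/my-topic")
                     .startMessageId(MessageId.earliest)
                     .create()) {
            // Assumed to mirror Consumer#getLastMessageIds: one entry per
            // (partitioned) topic the reader is attached to.
            List<TopicMessageId> lastIds = reader.getLastMessageIds();
            lastIds.forEach(id -> System.out.println(id.getOwnerTopic() + " -> " + id));
        }
    }
}
```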
apache#21155)

#### Issue 1
The client assumed the connection was inactive, but the broker assumed the connection was fine. The client tried to use a new connection to reconnect a producer, then got the error `Producer with name 'st-0-5' is already connected to topic`.

#### Issue 2
- Within a connection, the second registration waits for the first one to complete. But there is a bug that causes this mechanism to fail.
- If a producer uses a default name, the second registration will override the first one, but it cannot override the first one when a producer name is specified. This mechanism is meant to prevent a client from creating two producers with the same name. However, the method `Producer.isSuccessorTo` already checks the `producer-id`, and the `producer-id`s of multiple producers created by the same client are different, so this mechanism can be removed.

### Modifications

- For `issue 1`: If a producer with the same name tries to use a new connection, asynchronously check whether the old connection is still available. Producers related to a connection that is no longer available are automatically cleaned up.

- For `issue 2`:
  - Fix the bug that causes a completed producer future to be removed from `ServerCnx`.
  - Remove the mechanism that prevents a producer with a specified name from overriding the previous producer.
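
A hypothetical broker-side sketch of the `issue 1` handling (types and method names are illustrative, not the actual `ServerCnx` internals): when a duplicate producer name arrives on a new connection, check the old connection asynchronously and clean it up if it is gone.

```java
import java.util.concurrent.CompletableFuture;

// Hypothetical broker-side sketch; types and method names are illustrative,
// not the actual ServerCnx internals.
class ProducerRegistry {

    CompletableFuture<Boolean> tryRegister(ExistingProducer existing, Connection newConnection) {
        if (existing == null || existing.connection() == newConnection) {
            return CompletableFuture.completedFuture(true);
        }
        // Async check whether the old connection is still alive instead of
        // failing immediately with "Producer ... is already connected to topic".
        return existing.connection().checkLiveness().thenApply(alive -> {
            if (!alive) {
                existing.connection().cleanupProducers(); // clean up orphaned producers
                return true;  // the new connection may take over the name
            }
            return false;     // genuine duplicate, keep rejecting
        });
    }

    interface Connection {
        CompletableFuture<Boolean> checkLiveness();
        void cleanupProducers();
    }

    record ExistingProducer(Connection connection) {}
}
```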
…the fatal exception (apache#21143)

PIP: apache#21079 

### Motivation

Currently, the connector and function framework cannot terminate the function instance if fatal exceptions are thrown
outside the function instance thread: the current exception handling for connectors and Pulsar Functions only covers
exceptions raised from the function instance thread itself.

For example, suppose we have a sink connector that uses its own threads to batch-sink the data to an external system. If
any fatal exceptions occur in those threads, the function instance thread will not be aware of them and will
not be able to terminate the connector. This will cause the connector to hang indefinitely. There is a related issue
here: apache#9464

The same problem exists for the source connector. The source connector may also use a separate thread to fetch data from
an external system. If any fatal exceptions happen in that thread, the connector will also hang forever. This issue has
been observed for the Kafka source connector: apache#9464. We have fixed it by adding
the notifyError method to the `PushSource` class in PIP-281: apache#20807. However, this
does not solve the same problem that all source connectors face because not all connectors are implemented based on
the `PushSource` class.

The problem is the same for Pulsar Functions. Currently, a function can't throw fatal exceptions to the function
framework. We need to provide a way for function developers to do so.

We need a way for the connector and function developers to throw fatal exceptions outside the function instance
thread. The function framework should catch these exceptions and terminate the function accordingly.

### Modifications

Introduce a new method `fatal` to the context. All connector implementation code and function code
can use this context and call the `fatal` method to terminate the instance while raising a fatal exception.

After the connector or function raises the fatal exception, the function instance thread is interrupted.
The function framework then catches the exception, logs it, and terminates the function instance.
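
A sketch of how a sink with its own worker thread might surface such an error, assuming the new `fatal(Throwable)` method is exposed on `SinkContext`; the batching/flush logic is a placeholder.

```java
import java.util.Map;
import org.apache.pulsar.functions.api.Record;
import org.apache.pulsar.io.core.Sink;
import org.apache.pulsar.io.core.SinkContext;

// Sketch of surfacing a fatal error from a connector-owned thread; the batching
// and flush logic are placeholders, and fatal(...) is assumed to be available
// on SinkContext as introduced by this PIP.
public class BatchingSink implements Sink<byte[]> {
    private Thread flusher;

    @Override
    public void open(Map<String, Object> config, SinkContext context) {
        flusher = new Thread(() -> {
            try {
                flushLoop(); // placeholder: batch-sink data to an external system
            } catch (Throwable t) {
                // Terminate the whole function instance instead of hanging silently.
                context.fatal(t);
            }
        }, "batch-flusher");
        flusher.start();
    }

    @Override
    public void write(Record<byte[]> record) {
        // enqueue the record for the flusher thread (omitted)
    }

    @Override
    public void close() throws Exception {
        if (flusher != null) {
            flusher.interrupt();
        }
    }

    private void flushLoop() throws Exception {
        // placeholder flush loop
    }
}
```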
@github-actions github-actions bot added the PIP label Sep 18, 2023
@streamnativebot streamnativebot deleted the branch-3.2.0-SNAPSHOT branch September 25, 2023 21:05