Null values being replaced with default #716
Comments
The feature is available from confluent-schema-registry release 7.3. Are you using that version of the ser/deser lib in your Kafka Connect? Check the doc -> https://docs.confluent.io/platform/current/schema-registry/connect.html#null-values-replaced-with-default-values
@raphaelauv Yes, I'm using version 7.6.0. It works on the producer side (e.g. with Debezium), but the issue is with the Confluent Kafka Connect sinks. I have a PR here to address the issue.
"value.converter.ignore.default.for.nullables": "true" work with FROM confluentinc/cp-kafka-connect-base:7.6.0
RUN (echo 1 && yes) |confluent-hub install confluentinc/kafka-connect-s3:10.5.0 and the final file drop on s3 contain {"currency":null,"contry":"UNKNOW","_kafka_partition":0,"_kafka_offset":9,"_kafka_timestamp":1707851123401} |
I'm not using the Confluent Kafka Connect image to begin with, but I can give it a try when I have some time. More details on my setup: I'm working with the Strimzi base image (…). I do have …
Check the version of the schema-registry lib jars in that container image.
Description: Using the JsonFormat to write from Debezium to Kafka and then using the S3 sink connector to read from Kafka and save to S3 causes null values to always be stored with their default values. This PR therefore adds a new config property (for backwards compatibility) so the value transformer inside the S3 sink connector can be configured correctly. Tests for the configuration and the integration have been added as well. This addresses confluentinc#716, but for JSON instead of Avro.
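For context, a minimal sketch of an S3 sink using the JsonFormat path that PR targets, assuming the documented connector and converter classes; the exact name of the new property the PR introduces isn't captured in this thread, so it is deliberately omitted:

```json
{
  "connector.class": "io.confluent.connect.s3.S3SinkConnector",
  "format.class": "io.confluent.connect.s3.format.json.JsonFormat",
  "value.converter": "org.apache.kafka.connect.json.JsonConverter",
  "value.converter.schemas.enable": "true"
}
```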
Hello, I'm using Debezium to extract MySQL data into Kafka in Avro format using the Confluent Avro converter. I'm then using the Confluent S3 sink to get this data into S3 as Avro files. However, I'm running into an issue on the Kafka --> S3 side where my null values are being replaced with the MySQL default, even with `value.converter.ignore.default.for.nullables=true`. More details on the setup below.

Here's what my S3 sink settings look like:
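(The original settings didn't survive extraction; a minimal hypothetical config, with placeholder topic, bucket, and registry names, showing where the flag belongs:)

```json
{
  "connector.class": "io.confluent.connect.s3.S3SinkConnector",
  "topics": "my-topic",
  "s3.bucket.name": "my-bucket",
  "format.class": "io.confluent.connect.s3.format.avro.AvroFormat",
  "value.converter": "io.confluent.connect.avro.AvroConverter",
  "value.converter.schema.registry.url": "http://schema-registry:8081",
  "value.converter.ignore.default.for.nullables": "true"
}
```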
Here's what my schema looks like:
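(The schema itself is missing from this extract; based on the behavior described below, a minimal Avro schema with a nullable `test_str` field defaulting to `"test_str"` would look something like:)

```json
{
  "type": "record",
  "name": "ExampleRecord",
  "fields": [
    {"name": "test_str", "type": ["string", "null"], "default": "test_str"}
  ]
}
```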
And here's what my message looks like in Kafka:
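(Likewise the message body is missing; per the report, the field is genuinely null in Kafka:)

```json
{"test_str": null}
```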
And when I try to read the Avro file produced by the S3 connector via Python, this is what I'm seeing
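(The reader code isn't shown either; a minimal sketch using fastavro — an assumption on my part, any Avro reader would do — that reproduces the observation:)

```python
from fastavro import reader  # pip install fastavro

# Read back the Avro object file the S3 sink produced
with open("downloaded-from-s3.avro", "rb") as f:
    for record in reader(f):
        # Prints {'test_str': 'test_str'} instead of the expected {'test_str': None}
        print(record)
```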
Notice how the value for the `test_str` key is the default value (also `test_str`) instead of `None` or null. In part of the S3 connector logs, I do see `ignore.default.for.nullables = false`, so is this setting perhaps not taking?
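One way to check whether the setting actually reached the connector, assuming a default Kafka Connect REST endpoint and a placeholder connector name, is to read the live config back:

```bash
# Fetch the running connector's effective configuration and look for the flag
curl -s http://localhost:8083/connectors/my-s3-sink/config | \
  grep ignore.default.for.nullables
```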