Implement support for max_event_buffer_size: the maximum number of bytes from the final events payload sent #35299
Comments
Pinging code owners: see Adding Labels via Comments if you do not have permissions to add labels yourself.
The part that was most surprising here was that the documentation led us to believe that compression was applied just before sending over the wire. Thus our expectation was that the data buffer would be capped at the configured `max_content_length_logs` value before compression, based on its description in README.md.
The `max_content_length_*` settings apply to the payload as sent over the wire, which is the compressed size when compression is enabled. If you want to configure the size of the uncompressed payload, we will need more work to support that.
Yes, this is essentially what we want. I wouldn't mind working on a PR for this. Would you want to add another configuration field for this use case, or change the existing behavior of the `max_content_length_*` settings?
I think that'd be something such as `payload_size`. Note we also have a `max_event_size` configuration key. Those settings might interact with each other. Do you want to have different sizes per signal, such as a key each for logs, metrics, and traces? Or just one?
We are currently only using this exporter for logs, so one setting for all three would be fine. I'm thinking we could perhaps have a new field with content_length in its name.
No, please do not use content_length in your field name. Content-Length is an HTTP header used to represent the size of the payload in bytes over the wire. This is important for middleware like Nginx. This request for enhancement is not tied to that HTTP header.
Ah, I apologize, I did not connect the dots regarding the Content-Length HTTP header you mentioned. Given that the existing `max_content_length_*` behavior is not consistent on the wire (compressed vs. uncompressed), what do you think about a `max_event_buffer_size` setting: the maximum number of bytes from the final events payload sent? This would be adopting the defaults/max values from the other `max_content_length_*` settings.
We can use that setting to complete the approach; initially, it should not have a default value, so as not to introduce a breaking change.
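A minimal sketch of how the proposed setting could sit alongside the existing key, assuming the `max_event_buffer_size` name from the proposal above (the setting is not implemented, the values are illustrative, and the rest of the exporter configuration is omitted):

```yaml
exporters:
  splunk_hec:
    # Existing setting: caps the payload as sent over the wire
    # (the compressed size when compression is enabled).
    max_content_length_logs: 1048576
    # Proposed, not yet implemented: would cap the uncompressed events
    # payload before compression. Per the discussion above it would
    # ship without a default, so omitting it keeps today's behavior.
    max_event_buffer_size: 1048576
```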
@atoulme after a bit more thinking, there may be a better way to approach this. See this draft PR implementing it and the corresponding description.
The aforementioned "fields set above" being the existing `max_content_length_*` settings. What do you think about this proposal? It seems to be a bit cleaner and simpler.
I think it would be ideal if the configuration for this exporter were changed such that "batch size" meant uncompressed size and was separated in meaning from the Content-Length HTTP header. See Vector or Fluent Bit chunks for other examples. The use of compression should simply be a decision about the compression of the HTTP payload.
Let me add some additional context to my prior comment, as I do not want to give the impression that I want to change the scope of this issue. I think this issue should proceed as it is in order to meet our needs sooner. My comment was meant to express that a future breaking change altering the configuration parameters might be prudent. I think semantically linking the batch size to the HTTP Content-Length makes it more difficult to reason about this problem. As an administrator, I only care about the size of the uncompressed payload. Compression is beneficial to me in order to reduce cost over the wire. I believe these are distinct concerns and should not be conflated into a single configuration parameter.
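A hypothetical sketch of the separation described above; the `max_uncompressed_batch_size` key is invented for illustration and is not part of the exporter's configuration:

```yaml
exporters:
  splunk_hec:
    # Hypothetical: the batch size is expressed purely in uncompressed
    # bytes, independent of what actually goes over the wire.
    max_uncompressed_batch_size: 1048576
    # Compression stays a separate yes/no decision about the HTTP
    # payload and has no effect on how the batch size is measured.
    disable_compression: false
```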
Component(s)
exporter/splunkhec
Describe the issue you're reporting
Upon using this exporter we have noticed that when one of the `max_content_length_*` configurations is set (i.e. `max_content_length_logs`), the size of the payload on the wire may not be consistent depending on the `disable_compression` setting.

When `disable_compression` is `true`, the write function will return an error if the payload is over capacity, whereas the compression write function appears to write it as long as its compressed size is under the `max_content_length`.

So, if I understand it correctly, when `max_content_length_logs` is 1MB, the uncompressed writer will ensure the raw size is under 1MB, but the gzip writer will ensure the compressed size is under 1MB.

Is this intentional? We are exporting to an endpoint that has size limits on the uncompressed data. While we want to compress it, with variable compression rates it is hard to determine exactly what content length limit we should configure.
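For concreteness, a configuration along these lines (values are illustrative) shows the behavior described above; which size the 1 MiB cap applies to depends on `disable_compression`:

```yaml
exporters:
  splunk_hec:
    token: "00000000-0000-0000-0000-000000000000"
    endpoint: "https://splunk:8088/services/collector"
    # 1 MiB cap on the logs payload.
    max_content_length_logs: 1048576
    # disable_compression: true  -> the cap is checked against the
    #                               raw (uncompressed) payload.
    # disable_compression: false -> the cap is checked against the
    #                               gzip-compressed payload.
    disable_compression: false
```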