When we produce or consume messages through the backend API, a large number of messages are passed between the sidecar and the backend API over HTTP requests. HTTP is not designed to carry a large body efficiently, and a large body takes the JSON parser a long time to convert the string into a JSON array. Depending on the business logic in the backend API, the communication between the sidecar and the backend can become the bottleneck. To increase throughput, we can write several megabytes of data to a text file and pass only the file location to the other party for processing or consumption.
Because the messages are loaded line by line from the file, this is faster than parsing the entire payload into JSON first. We need this feature to address an initial-load performance issue for a customer who is loading several decades of data from the mainframe into Kafka.
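As a rough illustration of the idea (not the sidecar's actual API), the sketch below streams one message per line from a hand-off file and produces each line to Kafka instead of receiving the whole payload as one HTTP body and parsing it into a JSON array. The file path, topic name, and broker address are placeholders.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Properties;
import java.util.stream.Stream;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class FileBatchProducer {

    public static void main(String[] args) throws IOException {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker address
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        // In the proposed flow, only this file location is exchanged over HTTP;
        // the path here is a hypothetical hand-off file for illustration.
        Path batchFile = Path.of("/tmp/batch-handoff.txt");

        try (Producer<String, String> producer = new KafkaProducer<>(props);
             Stream<String> lines = Files.lines(batchFile)) {
            // Each line is one message, so we never hold the whole multi-MB payload
            // in memory or parse it as a single large JSON array.
            lines.filter(line -> !line.isBlank())
                 .forEach(line -> producer.send(new ProducerRecord<>("customer-topic", line)));
        }
    }
}
```

The streaming read keeps memory usage flat regardless of file size, which is what makes the file hand-off cheaper than a large HTTP body for the initial mainframe load.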