Implement streaming parsing of multipart/form-data, as turbo takes 3x the uploaded file(s) size in memory while parsing a multipart/form-data request #367
This PR:
- isolates the function that parses multipart headers out of parse_multipart_data()
- adds kwargs.streaming_multipart_bytes to httpserver, letting the user parse multipart data in a streaming fashion and save huge files (exceeding kwargs.large_body_bytes, or 512 if not set) to /tmp
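A minimal sketch of the spooling idea, with hypothetical names (spool_body and spool_threshold stand in for the real parser and kwargs.streaming_multipart_bytes; Turbo actually consumes data from the IOStream buffer, not a table of chunks):

```lua
-- Sketch only: spool an incoming multipart body to a temp file once it
-- grows past a threshold, instead of accumulating it all in memory.
-- spool_threshold plays the role of kwargs.streaming_multipart_bytes.
local function spool_body(chunks, spool_threshold, tmp_path)
    local received = 0
    local file = nil
    local in_memory = {}
    for _, chunk in ipairs(chunks) do
        received = received + #chunk
        if not file and received > spool_threshold then
            -- Threshold exceeded: switch from memory to a file on disk.
            file = assert(io.open(tmp_path, "wb"))
            file:write(table.concat(in_memory))
            in_memory = nil
        end
        if file then
            file:write(chunk)              -- append each chunk as it arrives
        else
            in_memory[#in_memory + 1] = chunk
        end
    end
    if file then
        file:close()
        return nil, tmp_path               -- large body lives on disk
    end
    return table.concat(in_memory)         -- small body stays in memory
end
```

The point of the threshold is that small bodies keep the old fast path, while only oversized uploads pay the cost of disk I/O.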
The 3x memory, taking a ~100M file as an instance (so the multipart/form-data body is also about 100M):

1. In iostream.IOStream:_read_to_buffer():
   self._read_buffer:append_right(ptr, sz)
   expands the buffer to ~100M.

2. In iostream.IOStream:_consume(loc):
   chunk = ffi.string(ptr + self._read_buffer_offset, loc)
   converting/copying the C string to a Lua string takes another ~100M.

3. In httputil.parse_multipart_data(data, boundary):
   argument[1] = data:sub(v1, b2)
   slicing the huge file content into a new string takes a further ~100M.

The solution contained in this PR is as below:
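The slicing cost in step 3 is easy to reproduce in plain Lua (a small experiment, not Turbo code): taking a substring allocates a full copy of the sliced range, so each parsing stage pays for the payload again.

```lua
-- Experiment: a Lua substring is a fresh allocation, so slicing a large
-- body doubles the memory held. Sizes here are ~1 MB stand-ins for the
-- ~100M case described above.
collectgarbage("collect")
local before = collectgarbage("count")        -- memory in use, in KB

local body = string.rep("a", 1024 * 1024)     -- stand-in for the body
local part = body:sub(2)                      -- like data:sub(v1, b2): a new ~1 MB string

collectgarbage("collect")
local after = collectgarbage("count")

-- body plus its slice: roughly 2 MB more than before
assert(after - before > 1.5 * 1024, "expected ~2 MB growth")
assert(#part == #body - 1)
```

Streaming the body to disk sidesteps all three copies, since the file content never needs to exist as one Lua string.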
The user can then os.rename() the file under /tmp to wherever they want.

Testing result:
$ luajit examples/multipart.lua
The response is OK and the uploaded and received files are identical.
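The os.rename() step mentioned above could be wrapped like this (a sketch; move_upload and the paths are illustrative, not part of the PR):

```lua
-- Move a spooled upload from /tmp to its final destination.
-- os.rename is cheap when source and destination are on the same
-- filesystem; across filesystems it fails and the caller must fall
-- back to copying the file.
local function move_upload(tmp_path, dest_path)
    local ok, err = os.rename(tmp_path, dest_path)
    if not ok then
        return nil, err      -- e.g. cross-device rename
    end
    return dest_path
end
```

On the same filesystem the rename is a metadata-only operation, so moving even a 100M upload out of /tmp costs no extra memory or file copying.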