s3x - issue with loading files content from localstack #236
I've created an up-to-date PR of the tests with a couple of fixes in the assertions (comparing bytes with a string, for example).

**Discovery**

I've tried different versions of localstack (2.0.x up to 2.3.x); all resulted in the same error :(

After some research, I came across this issue in the localstack repository, pointing towards a problem when using … When running …

Two quick changes make the tests pass:

**Option 1** – remove the `range` from the request:
```diff
 builder -> builder
     .bucket(path.bucketName())
+    .key(path.getKey()),
-    .key(path.getKey())
-    .range(range),
```

This means that when using the … When running …
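For reference, the `range` removed in Option 1 carries the HTTP `Range` header of a partial `GetObject` request. Below is a minimal sketch of the `bytes=start-end` format (RFC 9110); the helper name and fragment arithmetic are illustrative only, not the library's actual code:

```java
// Hypothetical helper illustrating the "bytes=start-end" HTTP Range header
// format used by ranged S3 GetObject requests. Names and fragment arithmetic
// are assumptions for illustration, not taken from the library.
public class RangeHeader {

    static String range(long fragmentIndex, int fragmentSize) {
        long start = fragmentIndex * (long) fragmentSize;
        long end = start + fragmentSize - 1; // Range uses an inclusive end offset
        return "bytes=" + start + "-" + end;
    }

    public static void main(String[] args) {
        System.out.println(range(0, 8192)); // first fragment: bytes=0-8191
        System.out.println(range(2, 8192)); // third fragment: bytes=16384-24575
    }
}
```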
**Option 2** – use the default async client builder:

```java
S3AsyncClientBuilder asyncClientBuilder = S3AsyncClient.builder();
if (!endpoint.isBlank()) {
    asyncClientBuilder.endpointOverride(URI.create(configuration.getEndpointProtocol() + "://" + endpoint));
}
```

This change causes the library to use the default (non-crt) client. When running …

**Cause**

Looking at the different request logs, it seems that when using … To find out …
**Discussion**

Removing the range from the request (Option 1) breaks the whole purpose of the ReadAhead class, so I wouldn't want to do that to fix this issue. Using the default (non-crt) client (Option 2) seems the better alternative: `S3ReadAheadByteChannel` is for reading, not necessarily the whole object, so there wouldn't be any improved throughput from using the crt client in this particular use case. @markjschreiber, what are your thoughts?
I'll test with a vanilla S3 bucket to see if the issue is specific to localstack. If needed we could drop our use of crt and use the default client.
Confirmed that this works:

```java
package software.amazon.nio.spi.examples;

import java.io.IOException;
import java.net.URI;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class ReadAllBytes {

    public static void main(String[] args) throws IOException {
        // args[0] is the URI of the object to read
        Path filePath = Paths.get(URI.create(args[0]));
        final byte[] bytes = Files.readAllBytes(filePath);

        // assumes this is a text file
        final String data = new String(bytes, StandardCharsets.UTF_8);
        System.out.println(data);
    }
}
```
Thanks for checking it, @markjschreiber! I'll make the changes so that …
* test(Files): Add integration tests for Files.copy / read* from issue #236 [skip ci]
* chore(S3ClientProvider): reformat
* rewrite: Extract `getRegionFromRegionName` and `endpointURI` methods
* chore: Use `isBlank` instead of `trim().isEmpty()`
* rewrite(S3ClientProvider): Extract method to configure and build client using S3BaseClientBuilder
* fix(236): Use non-crt async client for `S3ReadAheadByteChannel`
* chore: extract configureCrtClientForRegion method
Closing as #252 is now merged to main.
We want to use the newly implemented s3x functionality to have integration tests with localstack. A test will fire up the localstack container, copy some files to the container, and then have our software load the data from localstack via the provided URI. The copy and Files.exists work well, but I have trouble accessing the file content from localstack.
I forked your repo to reproduce the issue with a new integration test: https://github.com/thsandu/aws-java-nio-spi-for-s3-issues-with-s3x/blob/main/src/integrationTest/java/software/amazon/nio/spi/s3/FilesCopyTest.java
The Files.copy in the following code produces an exception.
I get the same result with the other ways of reading the file content from localstack, as you can see in the other two tests in the class.
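The access pattern the issue describes uses only standard `java.nio.file` calls. This is a minimal, self-contained sketch of that flow; a `file://` URI stands in for the `s3x://` URI so the example can run without localstack, and all names are illustrative:

```java
import java.io.IOException;
import java.net.URI;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class CopyFromUri {

    // Copies the file behind `uri` to `target` and returns the copied text.
    // With aws-java-nio-spi-for-s3 on the classpath, the same calls would
    // accept an s3x:// URI; a file:// URI is used in main so the sketch
    // stays self-contained (an illustrative stand-in, not the real test).
    static String copyAndRead(URI uri, Path target) throws IOException {
        Path source = Paths.get(uri);   // resolve a Path from the URI via the installed provider
        Files.copy(source, target, StandardCopyOption.REPLACE_EXISTING);
        return Files.readString(target, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws IOException {
        Path src = Files.writeString(Files.createTempFile("src", ".txt"), "hello");
        Path dst = Files.createTempFile("dst", ".txt");
        System.out.println(copyAndRead(src.toUri(), dst)); // prints: hello
    }
}
```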