
Fix S3 bucket validation for inter-region storage #7896

Merged — 5 commits merged into master from fix/s3-bucket-validation on Jun 20, 2024

Conversation

itaigilo (Contributor)

Closes #7846

Change Description

For the validation of inter-region storage (when it's not allowed):
When resolving the storage namespace region from S3, the code was asking the S3 API for the region of the whole namespace (for example, bucket-name/path-1/) instead of using only the bucket name (i.e., bucket-name).

This fix queries the region for the bucket name alone, which is what the S3 API expects.
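Below is a minimal sketch of the idea (illustrative only, not the actual PR diff; `getBucketRegion` is a hypothetical stand-in for the real S3 region lookup):

```go
package main

import (
	"context"
	"fmt"
	"strings"
)

// getBucketRegion is a hypothetical stand-in for the real S3 region lookup
// (e.g. GetBucketLocation), which only accepts a bare bucket name.
func getBucketRegion(ctx context.Context, bucket string) (string, error) {
	return "us-east-1", nil // placeholder result
}

// regionForNamespace extracts just the bucket from a storage namespace such as
// "s3://bucket-name/path-1/" before asking S3 for its region.
func regionForNamespace(ctx context.Context, storageNamespace string) (string, error) {
	namespace, found := strings.CutPrefix(storageNamespace, "s3://")
	if !found {
		return "", fmt.Errorf("not an s3 storage namespace: %q", storageNamespace)
	}
	// Keep only the bucket name: everything before the first "/".
	bucket, _, _ := strings.Cut(namespace, "/")
	return getBucketRegion(ctx, bucket)
}

func main() {
	region, err := regionForNamespace(context.Background(), "s3://bucket-name/path-1/")
	if err != nil {
		panic(err)
	}
	fmt.Println(region)
}
```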

Testing Details

Tested this manually on a local server (configured with S3 of course).

@itaigilo added the bug (Something isn't working) and include-changelog (PR description should be included in next release changelog) labels on Jun 20, 2024

E2E Test Results - DynamoDB Local - Local Block Adapter

13 passed

@itaigilo requested review from a team and removed the request for N-o-Z, guy-har and eladlachmi on June 20, 2024 09:24
arielshaqed (Contributor) left a comment:


Thanks!

I believe that we should avoid writing code to parse URLs as far as possible. Here I think we can just use a parser from the standard library.

@@ -870,11 +870,13 @@ func (a *Adapter) ResolveNamespace(storageNamespace, key string, identifierType
 }
 
 func (a *Adapter) GetRegion(ctx context.Context, storageNamespace string) (string, error) {
-	bucket, found := strings.CutPrefix(storageNamespace, fmt.Sprintf("%s://", block.BlockstoreTypeS3))
+	namespace, found := strings.CutPrefix(storageNamespace, fmt.Sprintf("%s://", block.BlockstoreTypeS3))
Inline review comment (Contributor):

This seems somewhat fragile. Proposal: I think you can use "net/url".Parse and then the bucket is the Host field of the URL. Here's an example of how this can work.
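A minimal sketch of what that suggestion could look like (not the reviewer's linked example, which isn't reproduced here), assuming namespaces of the form s3://bucket-name/path-1/:

```go
package main

import (
	"fmt"
	"net/url"
)

// bucketFromNamespace parses the storage namespace as a URL; for an s3:// URI
// the Host field is exactly the bucket name.
func bucketFromNamespace(storageNamespace string) (string, error) {
	u, err := url.Parse(storageNamespace)
	if err != nil {
		return "", err
	}
	if u.Scheme != "s3" || u.Host == "" {
		return "", fmt.Errorf("not an s3 storage namespace: %q", storageNamespace)
	}
	return u.Host, nil
}

func main() {
	bucket, _ := bucketFromNamespace("s3://bucket-name/path-1/")
	fmt.Println(bucket) // prints: bucket-name
}
```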

N-o-Z (Member) left a comment:


This is neither a bucket nor a namespace.
It is most likely a path in a bucket (e.g. <my-bucket>/<path>/<to>/<repo>/<namespace>), so both terms are inaccurate.

pkg/block/s3/adapter.go — comment resolved (outdated)
N-o-Z (Member) left a comment:


Approved with one comment

Co-authored-by: N-o-Z <[email protected]>
arielshaqed (Contributor) left a comment:


Thanks!

@itaigilo merged commit 2b936a4 into master on Jun 20, 2024 — 35 checks passed
@itaigilo deleted the fix/s3-bucket-validation branch on June 20, 2024 11:01
Successfully merging this pull request may close these issues.

Don't allow repositories with storage region different than the block storage region
3 participants