

How to use Amazon S3 Multi-Region Access Points with Amazon CloudFront to build active-active latency-based applications

Many AWS customers want to optimize the performance of their applications to deliver the best possible experience to their end users. They also need to architect their applications for disaster events, which is one of the biggest challenges they face. In this post, you will learn how to use Amazon S3 Multi-Region Access Points with Amazon CloudFront to serve your web applications, static assets, or any other objects stored in Amazon Simple Storage Service (Amazon S3) in a multi-Region, active-active setup with latency-based routing, so that content is delivered with the lowest network latency.

Solutions Architecture


  1. The client makes a request that is expected to match the path pattern for the S3 Multi-Region Access Point origin.
  2. CloudFront matches the path pattern to the S3 Multi-Region Access Point origin and invokes the associated origin request Lambda@Edge function.
  3. The Lambda function modifies the request object, which is passed in the event object, and signs the request using Signature Version 4A (SigV4A), as sketched after this list.
  4. The modified request is returned to CloudFront.
  5. CloudFront, using the SigV4A authorization headers from the modified request object, makes the request to the S3 Multi-Region Access Point origin.
  6. The S3 Multi-Region Access Point routes the request to the S3 bucket with the lowest network latency.
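Steps 3 and 4 are implemented by the origin request Lambda@Edge function. The following is a minimal sketch of what that handler could look like; it assumes the deployment package bundles botocore with its CRT extra (which provides CrtSigV4AsymAuth for SigV4A signing), and the handler name and header handling are illustrative rather than copied from the sample code.

import os

from botocore.awsrequest import AWSRequest
from botocore.credentials import Credentials
from botocore.crt.auth import CrtSigV4AsymAuth  # requires botocore[crt]

def handler(event, context):
    request = event["Records"][0]["cf"]["request"]
    headers = request["headers"]
    host = headers["host"][0]["value"]  # the Multi-Region Access Point endpoint

    # Build a signable request from the CloudFront request object. The
    # Lambda@Edge execution role credentials come from the runtime environment.
    aws_request = AWSRequest(method=request["method"],
                             url=f"https://{host}{request['uri']}")
    credentials = Credentials(os.environ["AWS_ACCESS_KEY_ID"],
                              os.environ["AWS_SECRET_ACCESS_KEY"],
                              os.environ["AWS_SESSION_TOKEN"])

    # Region "*" produces a SigV4A signature that is valid in any Region
    # behind the Multi-Region Access Point.
    CrtSigV4AsymAuth(credentials, "s3", "*").add_auth(aws_request)

    # Copy the signing headers back onto the CloudFront request object.
    for name, value in aws_request.headers.items():
        headers[name.lower()] = [{"key": name, "value": value}]

    return request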

Deployment and implementation details

Prerequisites

For this walkthrough you need:

  1. An AWS account with the AWS CLI installed and configured.
  2. An Amazon S3 bucket to hold the Lambda deployment package (the deployables bucket).
  3. Two Amazon S3 buckets, in two different AWS Regions, to act as the origins behind the S3 Multi-Region Access Point.
  4. Python and pip to package the Lambda function dependencies.

Packaging Lambda function

From within the project root folder, change into the lambda folder.

cd ./lambda

Install Python dependencies for the Lambda function.

# If you see an error that pip isn't found, try with "pip3".
pip install \
  --platform manylinux1_x86_64 \
  --only-binary=:all: \
  -t package/ -r ./requirements.txt

Create deployment package using Lambda function lambda_function.py and package folder.

cd package
zip -r ../deployment-package.zip .
cd ..
zip -g deployment-package.zip lambda_function.py

You should now have a deployment-package.zip file inside the lambda folder.

Run the following command to upload the deployment package to the Amazon S3 bucket from the second prerequisite. Replace <DEPLOYABLES-BUCKET-NAME-HERE> with the name of that bucket.

S3_BUCKET_DEPLOYABLES="<DEPLOYABLES-BUCKET-NAME-HERE>"
aws s3 cp ./deployment-package.zip s3://${S3_BUCKET_DEPLOYABLES}/lambdapackage/deployment-package.zip

Deploying CloudFormation stack

Since you’re still inside the lambda folder, first cd into the parent folder.

cd ..

Provide the Amazon S3 bucket names for the variables S3_BUCKET_ONE_NAME and S3_BUCKET_TWO_NAME by replacing the placeholders <BUCKET-ONE-NAME-HERE> and <BUCKET-TWO-NAME-HERE> with your S3 bucket names, then deploy the CloudFormation stack by running the following command.

Note: S3_BUCKET_ONE_NAME and S3_BUCKET_TWO_NAME are the two Amazon S3 buckets that already exist in your account, as highlighted in the prerequisites.

CF_STACK_NAME="cloudfront-s3-mrap-demo"
CF_TEMPLATE_FILE_PATH="cloudformation.template"
S3_BUCKET_ONE_NAME="<BUCKET-ONE-NAME-HERE>"
S3_BUCKET_TWO_NAME="<BUCKET-TWO-NAME-HERE>"

STACK_ID=$(aws cloudformation create-stack \
    --stack-name ${CF_STACK_NAME} \
    --template-body file://${CF_TEMPLATE_FILE_PATH} \
    --parameters ParameterKey=S3BucketOneName,ParameterValue=${S3_BUCKET_ONE_NAME} ParameterKey=S3BucketTwoName,ParameterValue=${S3_BUCKET_TWO_NAME} ParameterKey=S3BucketDeployables,ParameterValue=${S3_BUCKET_DEPLOYABLES} \
    --capabilities CAPABILITY_IAM \
    --query 'StackId' --output text --region us-east-1)

Optional: You can wait for stack creation to finish. When the following command returns, stack creation has completed.

aws cloudformation wait stack-create-complete \
  --stack-name ${STACK_ID} --region us-east-1

Testing the deployment

Before you start testing the deployed solution, you first need to upload a file to each of the Amazon S3 buckets that are associated with the S3 Multi-Region Access Point.

Note: For testing purposes, you are going to upload the file to each S3 bucket separately. For a production configuration, we recommend using replication rules inside the S3 Multi-Region Access Point to synchronize data among buckets. To learn more, see Configuring bucket replication for use with Multi-Region Access Points. Alternatively, you can use Amazon S3 replication in the S3 bucket configuration directly. To learn more, see Amazon S3 Replication.
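As an illustration of the second option, here is a minimal boto3 sketch that enables one-way replication from the first bucket to the second. The bucket names, account ID, and IAM replication role are placeholders, and versioning must already be enabled on both buckets.

import boto3

s3 = boto3.client("s3")

# Replicate new objects from bucket one to bucket two. The role must grant
# the S3 replication permissions on both buckets.
s3.put_bucket_replication(
    Bucket="<BUCKET-ONE-NAME-HERE>",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::<ACCOUNT-ID>:role/<REPLICATION-ROLE-NAME>",
        "Rules": [{
            "Status": "Enabled",
            "Priority": 1,
            "Filter": {},
            "DeleteMarkerReplication": {"Status": "Disabled"},
            "Destination": {"Bucket": "arn:aws:s3:::<BUCKET-TWO-NAME-HERE>"},
        }],
    },
)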

Upload file to Amazon S3 buckets

Upload an index.html file into the first S3 bucket.

CF_STACK_NAME="cloudfront-s3-mrap-demo"
S3_BUCKET_ONE_NAME=$(aws cloudformation describe-stacks \
      --stack-name ${CF_STACK_NAME} \
      --query "Stacks[0].Outputs[?OutputKey=='S3BucketOneName'].OutputValue" \
      --output text --region us-east-1)

BLOB="hello from s3 bucket ${S3_BUCKET_ONE_NAME}"
echo "${BLOB}" | aws s3 cp - s3://${S3_BUCKET_ONE_NAME}/index.html

Upload an index.html file into the second S3 bucket.

CF_STACK_NAME="cloudfront-s3-mrap-demo"
S3_BUCKET_TWO_NAME=$(aws cloudformation describe-stacks \
      --stack-name ${CF_STACK_NAME} \
      --query "Stacks[0].Outputs[?OutputKey=='S3BucketTwoName'].OutputValue" \
      --output text --region us-east-1)

BLOB="hello from s3 bucket ${S3_BUCKET_TWO_NAME}"
echo "${BLOB}" | aws s3 cp - s3://${S3_BUCKET_TWO_NAME}/index.html

Looking up CloudFront distribution DNS

Next, look up the CloudFront distribution DNS name and export it as an environment variable.

CF_STACK_NAME="cloudfront-s3-mrap-demo"
CLOUD_FRONT_DNS=$(aws cloudformation describe-stacks \
      --stack-name ${CF_STACK_NAME} \
      --query "Stacks[0].Outputs[?OutputKey=='CloudFrontDns'].OutputValue" \
      --output text --region us-east-1)

export CLOUD_FRONT_DNS="${CLOUD_FRONT_DNS}"
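You can now send a request through the distribution to verify the setup end to end. The following is a minimal smoke test in Python (a curl against the same URL works just as well); it assumes CLOUD_FRONT_DNS was exported as shown above.

import os
import urllib.request

# Request index.html through CloudFront; the response body shows which
# S3 bucket served the object.
url = f"https://{os.environ['CLOUD_FRONT_DNS']}/index.html"
with urllib.request.urlopen(url) as response:
    print(response.status, response.read().decode())

The printed body should match one of the two BLOB strings uploaded earlier, revealing which bucket the S3 Multi-Region Access Point routed the request to.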

Failover

For failover to work correctly, it's important for the Lambda function to distinguish between a normal request to the origin, which you want to modify, and a failover request, which you don't want to modify. A failover request means the initial request didn't succeed, so there is no need to modify the request again; the result would most likely be the same. For the failover case, you return the unmodified request object to CloudFront and let the failover origin handle the request.

CloudFront custom headers

This is achieved by adding a custom header to your chosen failover origin. That header is expected to match the value assigned to the failover_header variable. An if statement checks whether it's a failover request; if so, the request is returned to CloudFront before the code that modifies the request object runs, so the request remains unmodified.
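A minimal sketch of that check, assuming the failover origin is configured with a custom header named x-failover (the header name and value here are illustrative, not taken from the sample code):

failover_header = "failover-request"

def is_failover_request(request):
    origin = request.get("origin", {})
    # Origin custom headers live under "custom" or "s3", depending on
    # whether the origin is a custom origin or an S3 origin.
    config = origin.get("custom") or origin.get("s3") or {}
    for header in config.get("customHeaders", {}).get("x-failover", []):
        if header["value"] == failover_header:
            return True
    return False

def handler(event, context):
    request = event["Records"][0]["cf"]["request"]
    if is_failover_request(request):
        # Failover request: return it unmodified and let the
        # failover origin handle it.
        return request
    # ...otherwise modify and SigV4A-sign the request as shown earlier...
    return request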

Dynamically derived

If your failover origin is not yet another S3 Multi-Region Access Point, you can dynamically identify whether a request is bound for the failover origin by looking at the domainName in the request object. An S3 Multi-Region Access Point has a very distinctive domain name, so you can pattern match for any domain name other than the S3 Multi-Region Access Point one. With this, you remove the need to define a custom header on your failover origin.
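A minimal sketch of that variant; the suffix below is the standard S3 Multi-Region Access Point endpoint suffix, and the function name is illustrative:

MRAP_SUFFIX = ".accesspoint.s3-global.amazonaws.com"

def is_failover_request(request):
    origin = request.get("origin", {})
    config = origin.get("custom") or origin.get("s3") or {}
    # Any origin whose domain name is not the Multi-Region Access Point
    # endpoint is treated as the failover origin.
    return not config.get("domainName", "").endswith(MRAP_SUFFIX)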

Cleanup

After you’ve tested the solution, you can clean up all the created AWS resources by deleting the CloudFormation stack.

Note: CloudFormation might not be able to delete the Lambda function version because it is a replicated function associated with the CloudFront distribution. In that case, try deleting the Lambda function again a few hours later. For more information, see Deleting Lambda@Edge functions and replicas.

CF_STACK_NAME="cloudfront-s3-mrap-demo"
aws cloudformation delete-stack --stack-name ${CF_STACK_NAME} --region us-east-1

Delete the deployment-package.zip file from the Amazon S3 bucket S3_BUCKET_DEPLOYABLES and the index.html files from the Amazon S3 buckets S3_BUCKET_ONE_NAME and S3_BUCKET_TWO_NAME.

Provide the Amazon S3 bucket names for the variables S3_BUCKET_DEPLOYABLES, S3_BUCKET_ONE_NAME, and S3_BUCKET_TWO_NAME by replacing the placeholders with your S3 bucket names.

S3_BUCKET_DEPLOYABLES="<DEPLOYABLES-BUCKET-NAME-HERE>"
S3_BUCKET_ONE_NAME="<BUCKET-ONE-NAME-HERE>"
S3_BUCKET_TWO_NAME="<BUCKET-TWO-NAME-HERE>"

aws s3 rm s3://${S3_BUCKET_DEPLOYABLES}/lambdapackage/deployment-package.zip
aws s3 rm s3://${S3_BUCKET_ONE_NAME}/index.html
aws s3 rm s3://${S3_BUCKET_TWO_NAME}/index.html

Security

See CONTRIBUTING for more information.

License

This library is licensed under the MIT-0 License. See the LICENSE file.