Purpose
AWS in General
Specific AWS Services | Basics | Tips | Gotchas |
---|---|---|---|
Security and IAM | 📗 | 📘 | 📙 |
S3 | 📗 | 📘 | 📙 |
EC2 | 📗 | 📘 | 📙 |
AMIs | 📗 | 📘 | 📙 |
Auto Scaling | 📗 | 📘 | 📙 |
EBS | 📗 | 📘 | 📙 |
EFS | 📗 | | |
Load Balancers | 📗 | 📘 | 📙 |
CLB (ELB) | 📗 | 📘 | 📙 |
ALB | 📗 | 📘 | 📙 |
Elastic IPs | 📗 | 📘 | 📙 |
Glacier | 📗 | 📘 | 📙 |
RDS | 📗 | 📘 | 📙 |
DynamoDB | 📗 | 📘 | 📙 |
ECS | 📗 | 📘 | |
Lambda | 📗 | 📘 | 📙 |
API Gateway | 📗 | 📘 | |
Route 53 | 📗 | 📘 | |
CloudFormation | 📗 | 📘 | 📙 |
VPCs, Network Security, and Security Groups | 📗 | 📘 | 📙 |
KMS | 📗 | 📘 | |
CloudFront | 📗 | 📘 | 📙 |
DirectConnect | 📗 | 📘 | |
Redshift | 📗 | 📘 | 📙 |
EMR | 📗 | 📘 | |
Special Topics
Legal
Figures and Tables
- Figure: Tools and Services Market Landscape: A selection of third-party companies/products
- Figure: AWS Data Transfer Costs: Visual overview of data transfer costs
- Table: Service Matrix: How AWS services compare to alternatives
- Table: AWS Product Maturity and Releases: AWS product releases
- Table: Storage Durability, Availability, and Price: A quantitative comparison
A lot of information on AWS is already written. Most people learn AWS by reading a blog or a "getting started guide" and referring to the standard AWS references. Nonetheless, trustworthy and practical information and recommendations aren't easy to come by. AWS's own documentation is a great but sprawling resource that few have time to read fully, and it includes only official facts, so it omits the hard-won experience of engineers. The information in blogs or on Stack Overflow is also not consistently up to date.
This guide is by and for engineers who use AWS. It aims to be a useful, living reference that consolidates links, tips, gotchas, and best practices. It arose from discussion and editing over beers by several engineers who have used AWS extensively.
Before using the guide, please read the license and disclaimer.
This is an early in-progress draft! It's our first attempt at assembling this information, so it is still far from comprehensive and is likely to have omissions or errors.
Please help by joining the Slack channel to talk about AWS (anyone is welcome, even if you only have questions!), submitting a question, or contributing to the guide. This guide is open to contributions, so unlike a blog, it can keep improving. Like any open source effort, we combine efforts but also review to ensure high quality.
- Currently, this guide covers selected "core" services, such as EC2, S3, Load Balancers, EBS, and IAM, and partial details and tips around other services. We expect it to expand.
- It is not a tutorial, but rather a collection of information you can read and return to. It is for both beginners and the experienced.
- The goal of this guide is to be:
- Brief: Keep it dense and use links
- Practical: Basic facts, concrete details, advice, gotchas, and other "folk knowledge"
- Current: We can keep updating it, and anyone can contribute improvements
- Thoughtful: The goal is to be helpful rather than to present dry facts. Thoughtful opinion with rationale is welcome. Suggestions, notes, and opinions based on real experience can be extremely valuable. (We believe this is possible in a guide of this format, unlike in some other venues.)
- This guide is not sponsored by AWS or AWS-affiliated vendors. It is written by and for engineers who use AWS.
- 📒 Marks standard/official AWS pages and docs
- 🔹 Important or often overlooked tip
- ❗ Gotcha or warning (where risks or time or resource costs are significant)
- 🔸 Limitation or quirk (where it's not quite so bad)
- 🔥 Relatively new (and perhaps immature) services or features
- ⏱ Performance discussions
- ⛓ Lock-in: Products or decisions that are likely to tie you to AWS in a new or significant way – that is, later moving to a non-AWS alternative would be costly in terms of engineering effort
- 🚪 Alternative non-AWS options
- 💸 Cost issues, discussion, and gotchas
- 🕍 A mild warning attached to "full solution" or opinionated frameworks that may take significant time to understand and/or might not fit your needs exactly; the opposite of a point solution (the cathedral is a nod to Raymond's metaphor)
- 📗📘📙 Colors indicate basics, tips, and gotchas, respectively.
- 🚧 Areas where correction or improvement are needed (possibly with a link to an issue – do help!)
- AWS is the dominant public cloud computing provider.
- In general, "cloud computing" can refer to one of three types of cloud: "public," "private," and "hybrid." AWS is a public cloud provider, since anyone can use it. Private clouds are within a single (usually large) organization. Many companies use a hybrid of private and public clouds.
- The core features of AWS are infrastructure-as-a-service (IaaS) – that is, virtual machines and supporting infrastructure. Other cloud service models include platform-as-a-service (PaaS), which typically are more fully managed services that deploy customers' applications, and software-as-a-service (SaaS), which are cloud-based applications. AWS does offer a few products that fit these other models, too.
- In business terms, with infrastructure-as-a-service you have a variable cost model – it is OpEx, not CapEx (though some pre-purchased contracts are still CapEx).
- AWS revenue was about $5 billion as of 2015 (roughly 5% of Amazon.com's total revenue).
- Main reasons to use AWS:
- If your company is building systems or products that may need to scale
- and you have technical know-how
- and you want the most flexible tools
- and you're not significantly tied into different infrastructure already
- and you don't have internal, regulatory, or compliance reasons you can't use a public cloud-based solution
- and you're not on a Microsoft-first tech stack
- and you don't have a specific reason to use Google Cloud
- and you can afford, manage, or negotiate its somewhat higher costs
- ... then AWS is likely a good option for your company.
- Each of those reasons above might point to situations where other services are preferable. In practice, many, if not most, tech startups as well as a number of modern large companies can or already do benefit from using AWS. Many large enterprises are partly migrating internal infrastructure to Azure, Google Cloud, and AWS.
- Costs: Billing and cost management are such big topics that we have an entire section on this.
- 🔹EC2 vs. other services: Most users of AWS are most familiar with EC2, AWS's flagship virtual server product, and possibly a few others like S3 and CLBs. But AWS products now extend far beyond basic IaaS, and companies often do not properly understand or appreciate all the many AWS services and how they can be applied, due to the sharply growing number of services, their novelty and complexity, branding confusion, and fear of "lock-in" to proprietary AWS technology. Although a bit daunting, it's important for technical decision-makers in companies to understand the breadth of the AWS services and make informed decisions. (We hope this guide will help.)
- 🚪AWS vs. other cloud providers: While AWS is the dominant IaaS provider (31% market share in this 2016 estimate), there is significant competition, and some alternatives are better suited to some companies:
- The most significant direct competitor is Google Cloud. It arrived later to market than AWS, but has vast resources and is now used widely by many companies, including a few large ones. It is gaining market share. Not all AWS services have similar or analogous services in Google Cloud, and vice versa: in particular, Google offers some more advanced machine-learning-based services, like the Vision, Speech, and Natural Language APIs. It's not common to switch once you're up and running, but it does happen: Spotify migrated from AWS to Google Cloud. There is more discussion on Quora about relative benefits.
- Microsoft Azure is the de facto choice for companies and teams that are focused on a Microsoft stack.
- In China, AWS's footprint is relatively small. The market is dominated by Alibaba's Aliyun.
- Companies at (very) large scale may want to reduce costs by managing their own infrastructure. For example, Dropbox migrated to their own infrastructure.
- Other cloud providers such as DigitalOcean offer similar services, sometimes with greater ease of use, more personalized support, or lower cost. However, none of these match the breadth of products, mind-share, and market domination AWS now enjoys.
- Traditional managed hosting providers such as Rackspace offer cloud solutions as well.
- 🚪AWS vs. PaaS: If your goal is just to put up a single service that does something relatively simple, and you're trying to minimize time managing operations engineering, consider a platform-as-a-service such as Heroku. The AWS approach to PaaS, Elastic Beanstalk, is arguably more complex, especially for simple use cases.
- 🚪AWS vs. web hosting: If your main goal is to host a website or blog, and you don't expect to be building an app or more complex service, you may wish to consider one of the myriad of web hosting services.
- 🚪AWS vs. managed hosting: Traditionally, many companies pay managed hosting providers to maintain physical servers for them, then build and deploy their software on top of the rented hardware. This makes sense for businesses that want direct control over hardware, due to legacy, performance, or special compliance constraints, but is usually considered old-fashioned or unnecessary by many developer-centric startups and younger tech companies.
- Complexity: AWS will let you build and scale systems to the size of the largest companies, but the complexity of the services when used at scale requires significant depth of knowledge and experience. Even very simple use cases often require more knowledge to do "right" in AWS than in a simpler environment like Heroku or DigitalOcean. (This guide may help!)
- Geographic locations: AWS has data centers in over a dozen geographic locations, known as regions, in Europe, East Asia, North and South America, and now Australia and India. It also has many more edge locations globally for reduced latency of services like CloudFront.
- See the current list of regions and edge locations, including upcoming ones.
- If your infrastructure needs to be in close physical proximity to another service for latency or throughput reasons (for example, latency to an ad exchange), viability of AWS may depend on the location.
- ⛓Lock-in: As you use AWS, it's important to be aware of when you are depending on AWS services that do not have equivalents elsewhere.
- Lock-in may be completely fine for your company, or a significant risk. It's important from a business perspective to make this choice explicitly, and to consider the cost, operational, business continuity, and competitive risks of being tied to AWS. AWS is such a dominant and reliable vendor that many companies are comfortable using it to its full extent. Others can tell stories about the dangers of "cloud jail" when costs spiral.
- Generally, the more AWS services you use, the more lock-in you have to AWS – that is, the more engineering resources (time and money) it will take to change to other providers in the future.
- Basic services like virtual servers and standard databases are usually easy to migrate to other providers or on premises. Others, like load balancers and IAM, are specific to AWS but have close equivalents from other providers. The key thing to consider is whether engineers are architecting systems around specific AWS services that are not open source or relatively interchangeable. For example, Lambda, API Gateway, Kinesis, Redshift, and DynamoDB do not have substantially equivalent open source or commercial alternatives, while EC2, RDS (MySQL or Postgres), EMR, and ElastiCache more or less do. (See more below, where these are noted with ⛓.)
- Combining AWS and other cloud providers: Many customers combine AWS with other non-AWS services. For example, legacy systems or secure data might sit with a managed hosting provider while other systems run on AWS. Or a company might use only S3 with another provider doing everything else. However, small startups or projects starting fresh will typically stick to AWS or Google Cloud only.
- Hybrid cloud: In larger enterprises, it is common to have hybrid deployments encompassing private cloud or on-premises servers and AWS â or other enterprise cloud providers like IBM/Bluemix, Microsoft/Azure, NetApp, or EMC.
- Major customers: Who uses AWS and Google Cloud?
- AWS's list of customers includes large numbers of mainstream online properties and major brands, such as Netflix, Pinterest, Spotify (moving to Google Cloud), Airbnb, Expedia, Yelp, Zynga, Comcast, Nokia, and Bristol-Myers Squibb.
- Google Cloud's list of customers is large as well, and includes a few mainstream sites, such as Snapchat, Best Buy, Domino's, and Sony Music.
- AWS offers a lot of different services – about fifty at last count.
- Most customers use a few services heavily, a few services lightly, and the rest not at all. Which services you'll use depends on your use cases. Choices differ substantially from company to company.
- Immature and unpopular services: Just because AWS has a service that sounds promising doesn't mean you should use it. Some services are very narrow in use case, not mature, overly opinionated, or have limitations, so building your own solution may be better. We try to give a sense for this by breaking products into categories.
- Must-know infrastructure: Most typical small to medium-size users will focus on the following services first. If you manage use of AWS systems, you likely need to know at least a little about all of these. (Even if you don't use them, you should learn enough to make that choice intelligently.)
- IAM: User accounts and identities (you need to think about accounts early on!)
- EC2: Virtual servers and associated components, including:
- AMIs: Machine Images
- Load Balancers: CLBs and ALBs
- Autoscaling: Capacity scaling (adding and removing servers based on load)
- EBS: Network-attached disks
- Elastic IPs: Assigned IP addresses
- S3: Storage of files
- Route 53: DNS and domain registration
- VPC: Virtual networking, network security, and co-location; you automatically use it
- CloudFront: CDN for hosting content
- CloudWatch: Alerts, paging, monitoring
- Managed services: Existing software solutions you could run on your own, but with managed deployment:
- RDS: Managed relational databases (managed MySQL, Postgres, and Amazon's own Aurora database)
- EMR: Managed Hadoop
- Elasticsearch: Managed Elasticsearch
- ElastiCache: Managed Redis and Memcached
- Optional but important infrastructure: These are key and useful infrastructure components that are less widely known and used. You may have legitimate reasons to prefer alternatives, so evaluate with care to be sure they fit your needs:
- ⛓Lambda: Running small, fully managed tasks "serverless"
- CloudTrail: AWS API logging and audit (often neglected but important)
- ⛓🕍CloudFormation: Templatized configuration of collections of AWS resources
- 🕍Elastic Beanstalk: Fully managed (PaaS) deployment of packaged Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker applications
- 🔥⛓EFS: Network filesystem
- ⛓🕍ECS: Docker container/cluster management (note Docker can also be used directly, without ECS)
- ⛓ECR: Hosted private Docker registry
- 🔥Config: AWS configuration inventory, history, change notifications
- Special-purpose infrastructure: These services are focused on specific use cases and should be evaluated if they apply to your situation. Many also are proprietary architectures, so tend to tie you to AWS.
- ⛓DynamoDB: Low-latency NoSQL key-value store
- ⛓Glacier: Slow and cheap alternative to S3
- ⛓Kinesis: Streaming (distributed log) service
- ⛓SQS: Message queueing service
- ⛓Redshift: Data warehouse
- 🔥QuickSight: Business intelligence service
- SES: Send and receive e-mail for marketing or transactions
- ⛓API Gateway: Proxy, manage, and secure API calls
- ⛓IoT: Manage bidirectional communication over HTTP, WebSockets, and MQTT between AWS and clients (often but not necessarily "things" like appliances or sensors)
- ⛓WAF: Web firewall for CloudFront to deflect attacks
- ⛓KMS: Store and manage encryption keys securely
- Inspector: Security audit
- Trusted Advisor: Automated tips on reducing cost or making improvements
- Compound services: These are similarly specific, but are full-blown services that tackle complex problems and may tie you in. Usefulness depends on your requirements. If you have large or significant need, you may have these already managed by in-house systems and engineering teams.
- Machine Learning: Machine learning model training and classification
- ⛓🕍Data Pipeline: Managed ETL service
- ⛓🕍SWF: Managed background job workflow
- ⛓🕍Lumberyard: 3D game engine
- Mobile/app development:
- SNS: Manage app push notifications and other end-user notifications
- ⛓🕍Cognito: User authentication via Facebook, Twitter, etc.
- Device Farm: Cloud-based device testing
- Mobile Analytics: Analytics solution for app usage
- 🕍Mobile Hub: Comprehensive, managed mobile app framework
- Enterprise services: These are relevant if you have significant corporate cloud-based or hybrid needs. Many smaller companies and startups use other solutions, like Google Apps or Box. Larger companies may also have their own non-AWS IT solutions.
- AppStream: Windows apps in the cloud, with access from many devices
- Workspaces: Windows desktop in the cloud, with access from many devices
- WorkDocs (formerly Zocalo): Enterprise document sharing
- WorkMail: Enterprise managed e-mail and calendaring service
- Directory Service: Microsoft Active Directory in the cloud
- Direct Connect: Dedicated network connection between office or data center and AWS
- Storage Gateway: Bridge between on-premises IT and cloud storage
- Service Catalog: IT service approval and compliance
- Probably-don't-need-to-know services: Bottom line, our informal polling indicates these services are just not broadly used – and often for good reasons:
- Snowball: If you want to ship petabytes of data into or out of Amazon using a physical appliance, read on.
- CodeCommit: Git service. You're probably already using GitHub or your own solution (Stackshare has informal stats).
- 🕍CodePipeline: Continuous integration. You likely have another solution already.
- 🕍CodeDeploy: Deployment of code to EC2 servers. Again, you likely have another solution.
- 🕍OpsWorks: Management of your deployments using Chef. While Chef is popular, it seems few people use OpsWorks, since it involves going all-in on a whole different code deployment framework.
- AWS in Plain English offers more friendly explanation of what all the other different services are.
There are now enough cloud and "big data" enterprise companies and products that few can keep up with the market landscape. (See the Big Data Evolving Landscape – 2016 for one attempt at this.)
We've assembled a landscape of a few of the services. This is far from complete, but tries to emphasize services that are popular with AWS practitioners – services that specifically help with AWS, that are complementary, or that almost anyone using AWS must learn.
🚧 Suggestions to improve this figure? Please file an issue.
- 📒 The AWS General Reference covers a bunch of common concepts that are relevant for multiple services.
- AWS allows deployments in regions, which are isolated geographic locations that help you reduce latency or offer additional redundancy (though typically availability zones are the first tool of choice for high availability).
- Each service has API endpoints for each region. Endpoints differ from service to service and not all services are available in each region, as listed in these tables.
- Amazon Resource Names (ARNs) are specially formatted identifiers for identifying resources. They start with 'arn:' and are used in many services, and in particular for IAM policies.
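As a small illustration of the ARN format (the bucket name below is a made-up example, not a real resource), the colon-delimited fields can be pulled apart in a couple of lines of Bash:

```shell
# Sketch: split an ARN into its colon-delimited fields.
# General format: arn:partition:service:region:account-id:resource
# The bucket name here is a hypothetical example.
arn="arn:aws:s3:::example-bucket"
IFS=':' read -r prefix partition service region account resource <<< "$arn"
echo "partition=${partition} service=${service} resource=${resource}"
# prints: partition=aws service=s3 resource=example-bucket
```

Note that S3 ARNs leave the region and account fields empty (visible as consecutive colons), and some resource types embed further `/` or `:` separators within the resource portion.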
Many services within AWS can at least be compared with Google Cloud offerings or with internal Google services. And oftentimes you could assemble the same thing yourself with open source software. This table is an effort at listing these rough correspondences. (Remember that this table is imperfect, as in almost every case there are subtle differences of features!)
Service | AWS | Google Cloud | Google Internal | Microsoft Azure | Other providers | Open source "build your own" |
---|---|---|---|---|---|---|
Virtual server | EC2 | Compute Engine (GCE) | | Virtual Machine | DigitalOcean | OpenStack |
PaaS | Elastic Beanstalk | App Engine | App Engine | Web Apps | Heroku | Meteor, AppScale |
Serverless, microservices | Lambda, API Gateway | Functions | | Function Apps | | |
Container, cluster manager | ECS | Container Engine, Kubernetes | Borg or Omega | Container Service | | Kubernetes, Mesos, Aurora |
File storage | S3 | Cloud Storage | GFS | Storage Account | | Swift, HDFS |
Block storage | EBS | Persistent Disk | | Storage Account | | NFS |
SQL datastore | RDS | Cloud SQL | | SQL Database | | MySQL, PostgreSQL |
Sharded RDBMS | | Cloud SQL | F1, Spanner | | | Crate.io |
Bigtable | | Cloud Bigtable | Bigtable | | | CockroachDB |
Key-value store, column store | DynamoDB | Cloud Datastore | Megastore | Tables, DocumentDB | | Cassandra, CouchDB, RethinkDB, Redis |
Memory cache | ElastiCache | App Engine Memcache | | Redis Cache | | Memcached, Redis |
Search | CloudSearch, Elasticsearch (managed) | | | Search | Algolia, QBox | Elasticsearch, Solr |
Data warehouse | Redshift | BigQuery | | SQL Data Warehouse | Oracle, IBM, SAP, HP, many others | Greenplum |
Business intelligence | QuickSight | | | Power BI | Tableau | |
Lock manager | DynamoDB (weak) | | Chubby | Lease blobs in Storage Account | | ZooKeeper, Etcd, Consul |
Message broker | SQS, SNS, IoT | Pub/Sub | PubSub2 | Service Bus | | RabbitMQ, Kafka, 0MQ |
Streaming, distributed log | Kinesis | Dataflow | PubSub2 | Event Hubs | | Kafka Streams, Apex, Flink, Spark Streaming, Storm |
MapReduce | EMR | Dataproc | MapReduce | HDInsight, DataLake Analytics | Qubole | Hadoop |
Monitoring | CloudWatch | Monitoring | Borgmon | Monitor | | Prometheus(?) |
Metric management | | | Borgmon, TSDB | Application Insights | | Graphite, InfluxDB, OpenTSDB, Grafana, Riemann, Prometheus |
CDN | CloudFront | | | CDN | | Apache Traffic Server |
Load balancer | CLB/ALB | Load Balancing | GFE | Load Balancer, Application Gateway | | nginx, HAProxy, Apache Traffic Server |
DNS | Route 53 | DNS | | DNS | | bind |
Email | SES | | | | Sendgrid, Mandrill, Postmark | |
Git hosting | CodeCommit | | | Visual Studio Team Services | GitHub, BitBucket | GitLab |
User authentication | Cognito | | | Azure Active Directory | oauth.io | |
Mobile app analytics | Mobile Analytics | | | HockeyApp | Mixpanel | |
Mobile app testing | Device Farm | Cloud Test Lab | | Xamarin Test Cloud | BrowserStack, Sauce Labs, Testdroid | |
🚧 Please help fill this table in.
Selected resources with more detail on this chart:
- Google internal: MapReduce, Bigtable, Spanner, F1 vs Spanner, Bigtable vs Megastore
It's important to know the maturity of each AWS product. Here is a mostly complete list of first release dates, with links to the release notes. The most recently released services are listed first. Not all services are available in all regions; see this table.
Service | Original release | Availability |
---|---|---|
🔥Database Migration Service | 2016-03 | General |
🔥WAF | 2015-10 | General |
🔥Data Pipeline | 2015-10 | General |
🔥Elasticsearch | 2015-10 | General |
🔥IoT | 2015-08 | General |
🔥Service Catalog | 2015-07 | General |
🔥CodePipeline | 2015-07 | General |
🔥CodeCommit | 2015-07 | General |
🔥API Gateway | 2015-07 | General |
🔥Config | 2015-06 | General |
🔥EFS | 2015-05 | General |
🔥Machine Learning | 2015-04 | General |
Lambda | 2014-11 | General |
ECS | 2014-11 | General |
KMS | 2014-11 | General |
CodeDeploy | 2014-11 | General |
Kinesis | 2013-12 | General |
CloudTrail | 2013-11 | General |
AppStream | 2013-11 | Preview |
CloudHSM | 2013-03 | General |
Silk | 2013-03 | Obsolete? |
OpsWorks | 2013-02 | General |
Redshift | 2013-02 | General |
Elastic Transcoder | 2013-01 | General |
Glacier | 2012-08 | General |
CloudSearch | 2012-04 | General |
SWF | 2012-02 | General |
Storage Gateway | 2012-01 | General |
DynamoDB | 2012-01 | General |
DirectConnect | 2011-08 | General |
ElastiCache | 2011-08 | General |
CloudFormation | 2011-04 | General |
SES | 2011-01 | General |
Elastic Beanstalk | 2010-12 | General |
Route 53 | 2010-10 | General |
IAM | 2010-09 | General |
SNS | 2010-04 | General |
EMR | 2010-04 | General |
RDS | 2009-12 | General |
VPC | 2009-08 | General |
Import/Export | 2009-05 | General |
CloudWatch | 2009-05 | General |
CloudFront | 2008-11 | General |
Fulfillment Web Service | 2008-03 | Obsolete? |
SimpleDB | 2007-12 | ❗Nearly obsolete |
DevPay | 2007-12 | General |
Flexible Payments Service | 2007-08 | Retired |
EC2 | 2006-08 | General |
SQS | 2006-07 | General |
S3 | 2006-03 | General |
- Many applications have strict requirements around reliability, security, or data privacy. The AWS Compliance page has details about AWS's certifications, which include PCI DSS Level 1, SOC 3, and ISO 9001.
- Security in the cloud is a complex topic, based on a shared responsibility model, where some elements of compliance are provided by AWS, and some are provided by your company.
- Several third-party vendors offer assistance with compliance, security, and auditing on AWS. If you have substantial needs in these areas, assistance is a good idea.
- From inside China, AWS services outside China are generally accessible, though there are at times breakages in service. There are also AWS services inside China.
- Forums: For many problems, it's worth searching or asking for help in the discussion forums to see if it's a known issue.
- Premium support: AWS offers several levels of premium support.
- Any small company should probably pay for the inexpensive "Developer" support, as it's a flat $49/month and it lets you file support tickets with 12-to-24-hour turnaround time.
- The higher-level support services are quite expensive – they increase your bill by at least 10%. Many large and effective companies never pay for this level of support. They are usually more helpful for midsize or larger companies needing rapid turnaround on deeper or more perplexing problems.
- Keep in mind, a flexible architecture can reduce the need for support. You shouldn't be relying on AWS to solve your problems often. For example, if you can easily re-provision a new server, it may not be urgent to solve a rare kernel-level issue unique to one EC2 instance. If your EBS volumes have recent snapshots, you may be able to restore a volume before support can rectify the issue with the old volume. If your services have an issue in one availability zone, you should in any case be able to rely on a redundant zone or migrate services to another zone.
- Larger customers also get access to AWS Enterprise support, with dedicated technical account managers (TAMs) and shorter response time SLAs.
- There is definitely some controversy about how useful the paid support is. The support staff don't always seem to have the information and authority to solve the problems that are brought to their attention. Often your ability to have a problem solved may depend on your relationship with your account rep.
- Account manager: If you are at significant levels of spend (thousands of US dollars plus per month), you may be assigned (or may wish to ask for) a dedicated account manager.
- These are a great resource, even if you're not paying for premium support. Build a good relationship with them and make use of them for questions, problems, and guidance.
- Assign a single point of contact on your company's side, to avoid confusing or overwhelming them.
- Contact: The main web contact point for AWS is here. Many technical requests can be made via these channels.
- Consulting and managed services: For more hands-on assistance, AWS has established relationships with many consulting partners and managed service partners. The big consultants won't be cheap, but depending on your needs, they may save you money long term by helping you set up your architecture more effectively, or by offering specific expertise, e.g. security. Managed service providers provide longer-term, full-service management of cloud resources.
- AWS Professional Services: AWS provides consulting services alone or in combination with partners.
- 🔸Many resources in AWS have limits on them. This is actually helpful, so you don't incur large costs accidentally. You have to request that quotas be increased by opening support tickets. Some limits are easy to raise, and some are not. (Some of these are noted in sections below.)
- 🔸The AWS terms of service are extensive. Much is expected boilerplate, but they do contain important notes and restrictions on each service. In particular, there are restrictions against using many AWS services in safety-critical systems. (Those appreciative of legal humor may wish to review clause 57.10.)
- OpenStack is a private cloud alternative to AWS used by large companies that wish to avoid public cloud offerings.
- Certifications: AWS offers certifications for IT professionals who want to demonstrate their knowledge. They are:
- Certified Solutions Architect Associate and Professional, Certified Developer Associate
- Getting certified: If you're interested in studying for and getting certifications, this practical overview tells you a lot of what you need to know. The official page is here and there is an FAQ.
- Do you need a certification? Especially in consulting companies or when working in key tech roles in large non-tech companies, certifications are important credentials. In others, including in many tech companies and startups, certifications are not common or considered necessary. (In fact, fairly or not, some Silicon Valley hiring managers and engineers see them as a "negative" signal on a resume.)
A great challenge in using AWS to build complex systems (and with DevOps in general) is to manage infrastructure state effectively over time. In general, this boils down to three broad goals for the state of your infrastructure:
- Visibility: Do you know the state of your infrastructure (what services you are using, and exactly how)? Do you also know when you – and anyone on your team – make changes? Can you detect misconfigurations, problems, and incidents with your service?
- Automation: Can you reconfigure your infrastructure to reproduce past configurations or scale up existing ones without a lot of extra manual work, or requiring knowledge that's only in someone's head? Can you respond to incidents easily or automatically?
- Flexibility: Can you improve your configurations and scale up in new ways without significant effort? Can you add more complexity using the same tools? Do you share, review, and improve your configurations within your team?
Much of what we discuss below is really about how to improve the answers to these questions.
There are several approaches to deploying infrastructure with AWS, from the console to complex automation tools, to third-party services, all of which attempt to help achieve visibility, automation, and flexibility.
The first way most people experiment with AWS is via its web interface, the AWS Console. But using the Console is a highly manual process, and often works against automation or flexibility.
So if you're not going to manage your AWS configurations manually, what should you do? Sadly, there are no simple, universal answers. Each approach has pros and cons, and the approaches taken by different companies vary widely: directly using the APIs (and building tooling on top yourself), using command-line tools, and using third-party tools and services.
- The AWS Console lets you control much (but not all) functionality of AWS via a web interface.
- Ideally, you should only use the AWS Console in a few specific situations:
- Itâs great for read-only usage. If youâre trying to understand the state of your system, logging in and browsing it is very helpful.
- It is also reasonably workable for very small systems and teams (for example, one engineer setting up one server that doesnât change often).
- It can be useful for operations youâre only going to do rarely, like less than once a month (for example, a one-time VPC setup you probably wonât revisit for a year). In this case using the console can be the simplest approach.
- âThink before you use the console: The AWS Console is convenient, but also the enemy of automation, reproducibility, and team communication. If youâre likely to be making the same change multiple times, avoid the console. Favor some sort of automation, or at least have a path toward automation, as discussed next. Not only does using the console preclude automation, which wastes time later, but it prevents documentation, clarity, and standardization around processes for yourself and your team.
- The aws command-line interface (CLI), used via the aws command, is the most basic way to save and automate AWS operations.
- Don't underestimate its power. It also has the advantage of being well-maintained: it covers a large proportion of all AWS services, and is up to date.
- In general, whenever you can, prefer the command line to the AWS Console for performing operations.
- đšEven in absence of fancier tools, you can write simple Bash scripts that invoke aws with specific arguments, and check these into Git. This is a primitive but effective way to document operations youâve performed. It improves automation, allows code review and sharing on a team, and gives others a starting point for future work.
- đšFor use that is primarily interactive, and not scripted, consider instead using the aws-shell tool from AWS. It is easier to use, with auto-completion and a colorful UI, but still works on the command line. If youâre using SAWS, a previous version of the program, you should migrate to aws-shell.
- SDKs for using AWS APIs are available in most major languages, with Go, iOS, Java, JavaScript, Python, Ruby, and PHP being most heavily used. AWS maintains a short list, but the awesome-aws list is the most comprehensive and current. Note support for C++ is still new.
- Retry logic: An important aspect to consider whenever using SDKs is error handling; under heavy use, a wide variety of failures, from programming errors to throttling to AWS-related outages or failures, can be expected to occur. SDKs typically implement exponential backoff to address this, but this may need to be understood and adjusted over time for some applications. For example, it is often helpful to alert on some error codes and not on others.
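The backoff strategy SDKs implement internally can be sketched in plain Python. This is an illustrative sketch (the function and parameter names are our own, not an AWS API); a real SDK retries only on specific throttling and 5xx error codes, not on every exception:

```python
import random
import time

def with_backoff(call, max_attempts=5, base=0.5, cap=30.0,
                 retryable=(Exception,)):
    """Retry `call` with exponential backoff and full jitter.

    Illustrative only: production code should restrict `retryable`
    to throttling and transient server-side errors.
    """
    for attempt in range(max_attempts):
        try:
            return call()
        except retryable:
            if attempt == max_attempts - 1:
                raise
            # Sleep up to base * 2^attempt seconds, capped, with full jitter.
            time.sleep(random.uniform(0, min(cap, base * 2 ** attempt)))
```

Tuning `base`, `cap`, and `max_attempts` per application, and alerting on the error codes that matter, is usually the work the bullet above refers to.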
- âDonât use APIs directly. Although AWS documentation includes lots of API details, itâs better to use the SDKs for your preferred language to access APIs. SDKs are more mature, robust, and well-maintained than something youâd write yourself.
- A good way to automate operations in a custom way is Boto3, also known as the Amazon SDK for Python. Boto2, the previous version of this library, has been in wide use for years, but now there is a newer version with official support from Amazon, so prefer Boto3 for new projects.
- If you find yourself writing a Bash script with more than one or two CLI commands, youâre probably doing it wrong. Stop, and consider writing a Boto script instead. This has the advantages that you can:
- Check return codes easily so success of each step depends on success of past steps.
- Grab interesting bits of data from responses, like instance ids or DNS names.
- Add useful environment information (for example, tag your instances with git revisions, or inject the latest build identifier into your initialization script).
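As a sketch of the first two points, the helpers below (hypothetical names of our own; the dict shapes follow boto3's EC2 response format) check a response's status and pull out instance ids:

```python
def check_ok(response):
    """Raise unless an AWS API response dict reports a 2xx HTTP status.

    boto3 responses carry status under ResponseMetadata; checking it
    explicitly makes each step of a script depend on the success of
    the previous one.
    """
    status = response.get("ResponseMetadata", {}).get("HTTPStatusCode", 0)
    if not 200 <= status < 300:
        raise RuntimeError("AWS call failed with HTTP status %s" % status)
    return response

def instance_ids(response):
    """Extract instance ids from an EC2 run_instances-style response."""
    return [i["InstanceId"] for i in response.get("Instances", [])]
```

In a real script, `response` would come from a boto3 call such as `ec2.run_instances(...)`; the functions above only walk the resulting dict, so they can be tested offline.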
- đšTagging resources is an essential practice, especially as organizations grow, to better understand your resource usage. For example, you can through automation or convention add tags:
- For the org or developer that âownsâ that resource
- For the product that resource supports
- To label lifecycles, such as temporary resources or one that should be deprovisioned in the future
- To distinguish production-critical infrastructure (e.g. serving systems vs backend pipelines)
- To distinguish resources with special security or compliance requirements
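For example, a small helper (the name is ours, but the `[{'Key': ..., 'Value': ...}]` shape is what EC2 tagging APIs such as boto3's `create_tags` expect) can standardize how such tags are applied:

```python
def to_aws_tags(tags):
    """Convert a plain dict of tags into the list-of-dicts form that
    EC2/boto3 tagging APIs expect, sorted for deterministic output."""
    return [{"Key": k, "Value": v} for k, v in sorted(tags.items())]
```

A real call might then look like `ec2.create_tags(Resources=[instance_id], Tags=to_aws_tags({"owner": "data-eng", "env": "prod", "lifecycle": "temporary"}))`.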
This guide is about AWS, not DevOps or server configuration management in general. But before getting into AWS in detail, itâs worth noting that in addition to the configuration management for your AWS resources, there is the long-standing problem of configuration management for servers themselves.
- Herokuâs Twelve-Factor App principles list some established general best practices for deploying applications.
- Pets vs cattle: Treat servers like cattle, not pets. That is, design systems so infrastructure is disposable. It should be minimally worrisome if a server is unexpectedly destroyed.
- The concept of immutable infrastructure is an extension of this idea.
- Minimize application state on EC2 instances. In general, instances should be able to be killed or die unexpectedly with minimal impact. State that is in your application should quickly move to RDS, S3, DynamoDB, EFS, or other data stores not on that instance. EBS is also an option, though it generally should not be the bootable volume, and EBS will require manual or automated re-mounting.
- There is a large set of open source tools for managing configuration of server instances.
- These are generally not dependent on any particular cloud infrastructure, and work with any variety of Linux (or in many cases, a variety of operating systems).
- Leading configuration management tools are Puppet, Chef, Ansible, and Saltstack. These arenât the focus of this guide, but we may mention them as they relate to AWS.
- Docker and the containerization trend are changing the way many servers and services are deployed in general.
- Containers are designed as a way to package up your application(s) and all of their dependencies in a known way. When you build a container, you are including every library or binary your application needs, outside of the kernel. A big advantage of this approach is that itâs easy to test and validate a container locally without worrying about some difference between your computer and the servers you deploy on.
- A consequence of this is that you need fewer AMIs and boot scripts; for most deployments, the only boot script you need is a template that fetches an exported docker image and runs it.
- Companies that are embracing microservice architectures will often turn to container-based deployments.
- AWS launched ECS as a service to manage clusters via Docker in late 2014, though many people still deploy Docker directly themselves. See the ECS section for more details.
- Store and track instance metadata (such as instance id, availability zone, etc.) and deployment info (application build id, Git revision, etc.) in your logs or reports. The instance metadata service can help collect some of the AWS data youâll need.
- Use log management services: Be sure to set up a way to view and manage logs externally from servers.
- Cloud-based services such as Sumo Logic, Splunk Cloud, Scalyr, and Loggly are the easiest to set up and use (and also the most expensive, which may be a factor depending on how much log data you have).
- Major open source alternatives include Elasticsearch, Logstash, and Kibana (the âElastic Stackâ) and Graylog.
- If you can afford it (you have little data or lots of money) and donât have special needs, it makes sense to use hosted services whenever possible, since setting up your own scalable log processing systems is notoriously time consuming.
- Track and graph statistics: The AWS Console can show you simple graphs from CloudWatch, but you will typically want to track and graph many kinds of statistics, from CloudWatch and your applications. Collect and export helpful metrics everywhere you can (as long as volume is manageable and you can afford it).
- NTP and accurate time: If you are not using Amazon Linux (which comes preconfigured), you should confirm your servers configure NTP correctly, to avoid insidious time drift (which can then cause all sorts of issues, from breaking API calls to misleading logs). This should be part of your automatic configuration for every server. If time has already drifted substantially (generally >1000 seconds), remember NTP wonât shift it back, so you may need to remediate manually (for example, like this on Ubuntu).
We cover security basics first, since configuring user accounts is something you usually have to do early on when setting up your system.
- đ IAM Homepage â User guide â FAQ
- The AWS Security Blog is one of the best sources of news and information on AWS security.
- IAM is the service you use to manage accounts and permissioning for AWS.
- Managing security and access control with AWS is critical, so every AWS administrator needs to use and understand IAM, at least at a basic level.
- IAM manages various kinds of authentication, for both users and for software services that may need to authenticate with AWS, including:
- Passwords to log into the console. These are a username and password for real users.
- Access keys, which you may use with command-line tools. These are two strings: the "id", a 20-character upper-case alphanumeric string (usually beginning with 'AKIA'), and the secret, a 40-character mixed-case base64-style string. These are often set up for services, not just users.
- Multi-factor authentication (MFA), which is the highly recommended practice of using a keychain fob or smartphone app as a second layer of protection for user authentication.
- IAM allows complex and fine-grained control of permissions, dividing users into groups, assigning permissions to roles, and so on. There is a policy language that can be used to customize security policies in a fine-grained way.
- đ¸The policy language has a complex and error-prone JSON syntax that's quite confusing, so unless you are an expert, it is wise to base yours off trusted examples or AWS' own pre-defined managed policies.
- At the beginning, IAM policy may be very simple, but for large systems, it will grow in complexity, and need to be managed with care.
- đšMake sure one person (perhaps with a backup) in your organization is formally assigned ownership of managing IAM policies, and make sure every administrator works with that person to have changes reviewed. This goes a long way toward avoiding accidental and serious misconfigurations.
- It is best to give each user or service the minimum privileges needed to perform their duties. This is the principle of least privilege, one of the foundations of good security. Organize all IAM users and groups according to levels of access they need.
- đšUse IAM to create individual user accounts and use IAM accounts for all users from the beginning. This is slightly more work, but not that much.
- That way, you define different users, and groups with different levels of privilege (if you want, choose from Amazonâs default suggestions, of administrator, power user, etc.).
- This allows credential revocation, which is critical in some situations. If an employee leaves, or a key is compromised, you can revoke credentials with little effort.
- You can set up Active Directory federation to use organizational accounts in AWS.
- âEnable MFA on your account.
- You should always use MFA, and the sooner the better; enabling it when you already have many users is extra work.
- Unfortunately it canât be enforced in software, so an administrative policy has to be established.
- Most users can use the Google Authenticator app (on iOS or Android) to support two-factor authentication. For the root account, consider a hardware fob.
- âTurn on CloudTrail: One of the first things you should do is enable CloudTrail. Even if you are not a security hawk, there is little reason not to do this from the beginning, so you have data on what has been happening in your AWS account should you need that information. Youâll likely also want to set up a log management service to search and access these logs.
- đšUse IAM roles for EC2: Rather than assign IAM users to applications like services and then sharing the sensitive credentials, define and assign roles to EC2 instances and have applications retrieve credentials from the instance metadata.
- Assign IAM roles by realm â for example, to development, staging, and production. If youâre setting up a role, it should be tied to a specific realm so you have clean separation. This prevents, for example, a development instance from connecting to a production database.
- Best practices: AWS' list of best practices is worth reading in full up front.
- Multiple accounts: Decide on whether you want to use multiple AWS accounts and research how to organize access across them. Factors to consider:
- Number of users
- Importance of isolation
- Resource Limits
- Permission granularity
- Security
- API Limits
- Regulatory issues
- Workload
- Size of infrastructure
- Cost of multi-account âoverheadâ: Internal AWS service management tools may need to be custom built or adapted.
- đšIt can help to use separate AWS accounts for independent parts of your infrastructure if you expect a high rate of AWS API calls, since AWS throttles calls at the AWS account level.
- Inspector is an automated security assessment service from AWS that helps identify common security risks. This allows validation that you adhere to certain security practices and may help with compliance.
- Trusted Advisor addresses a variety of best practices, but also offers some basic security checks around IAM usage, security group configurations, and MFA.
- Use KMS for managing keys: AWS offers KMS for securely managing encryption keys, which is usually a far better option than handling key security yourself. See below.
- AWS WAF is a web application firewall to help you protect your applications from common attack patterns.
- Security auditing:
- Security Monkey is an open source tool that is designed to assist with security audits.
- đšExport and audit security settings: You can audit security policies simply by exporting settings using AWS APIs, e.g. using a Boto script like SecConfig.py (from this 2013 talk) and then reviewing and monitoring changes manually or automatically.
- âDonât share user credentials: Itâs remarkably common for first-time AWS users to create one account and one set of credentials (access key or password), and then use them for a while, sharing among engineers and others within a company. This is easy. But donât do this. This is an insecure practice for many reasons, but in particular, if you do, you will have reduced ability to revoke credentials on a per-user or per-service basis (for example, if an employee leaves or a key is compromised), which can lead to serious complications.
- âInstance metadata throttling: The instance metadata service has rate limiting on API calls. If you deploy IAM roles widely (as you should!) and have lots of services, you may hit global account limits easily.
- One solution is to have code or scripts cache and reuse the credentials locally for a short period (say 2 minutes). For example, they can be put into the ~/.aws/credentials file but must also be refreshed automatically.
- But be careful not to cache credentials for too long, as they expire. (Note the other dynamic metadata also changes over time and should not be cached a long time, either.)
- đ¸Some IAM operations are slower than other API calls (many seconds), since AWS needs to propagate these globally across regions.
- âThe uptime of IAMâs API has historically been lower than that of the instance metadata API. Be wary of incorporating a dependency on IAMâs API into critical paths or subsystems â for example, if you validate a userâs IAM group membership when they log into an instance and arenât careful about precaching group membership or maintaining a back door, you might end up locking users out altogether when the API isnât available.
- đ Homepage â Developer guide â FAQ â Pricing
- S3 (Simple Storage Service) is AWS' standard cloud storage service, offering file (opaque "blob") storage of arbitrary numbers of files of almost any size, from 0 to 5 TB. (Prior to 2011 the maximum size was 5 GB; larger sizes are now well supported via multipart upload.)
- Items, or objects, are placed into named buckets. Each object is stored under a name, usually called its key; the main content is the value.
- Objects are created, deleted, or updated. Large objects can be streamed, but you cannot access or modify parts of a value; you need to update the whole object.
- Every object also has metadata, which includes arbitrary key-value pairs, and is used in a way similar to HTTP headers. Some metadata is system-defined, some are significant when serving HTTP content from buckets or CloudFront, and you can also define arbitrary metadata for your own use.
- S3 URIs: Although often bucket and key names are provided in APIs individually, itâs also common practice to write an S3 location in the form 's3://bucket-name/path/to/key' (where the key here is 'path/to/key'). (Youâll also see 's3n://' and 's3a://' prefixes in Hadoop systems.)
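A minimal parser for this convention (our own helper, not an AWS API; the Hadoop-style 's3n://' and 's3a://' prefixes are accepted too) might look like:

```python
def parse_s3_uri(uri):
    """Split an 's3://bucket-name/path/to/key' URI into (bucket, key).

    The key may be empty when the URI names only a bucket."""
    scheme, _, rest = uri.partition("://")
    if scheme not in ("s3", "s3n", "s3a"):
        raise ValueError("not an S3 URI: %r" % uri)
    bucket, _, key = rest.partition("/")
    return bucket, key
```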
- S3 vs Glacier, EBS, and EFS: AWS offers many storage services, and several besides S3 offer file-type abstractions. Glacier is for cheaper and infrequently accessed archival storage. EBS, unlike S3, allows random access to file contents via a traditional filesystem, but can only be attached to one EC2 instance at a time. EFS is a network filesystem many instances can connect to, but at higher cost. See the comparison table.
- For most practical purposes, you can consider S3 capacity unlimited, both in total size of files and number of objects.
- Bucket naming: Buckets are chosen from a global namespace (across all regions, even though S3 itself stores data in whichever S3 region you select), so youâll find many bucket names are already taken. Creating a bucket means taking ownership of the name until you delete it. Bucket names have a few restrictions on them.
- Bucket names can be used as part of the hostname when accessing the bucket or its contents, like `<bucket_name>.s3-us-east-1.amazonaws.com`, as long as the name is DNS compliant.
- A common practice is to use the company name acronym or abbreviation to prefix (or suffix, if you prefer DNS-style hierarchy) all bucket names (but please, don't use a check on this as a security measure; it is highly insecure and easily circumvented!).
- đ¸Bucket names with '.' (periods) in them can cause certificate mismatches when used with SSL. Use '-' instead, since this then conforms with both SSL expectations and is DNS compliant.
- The number of objects in a bucket is essentially unlimited. Customers routinely have millions of objects.
- Versioning: S3 has optional versioning support, so that all versions of objects are preserved on a bucket. This is mostly useful if you want an archive of changes or the ability to back out mistakes (it has none of the features of full version control systems like Git).
- Durability: Durability of S3 is extremely high, since internally it keeps several replicas. If you don't delete it by accident, you can count on S3 not losing your data. (AWS offers the seemingly improbable durability rate of 99.999999999%, but this is a mathematical calculation based on independent failure rates and levels of replication, not a true probability estimate. Either way, S3 has had a very good record of durability.) Note this is much higher durability than EBS! If durability is less important for your application, you can use S3 Reduced Redundancy Storage, which lowers the cost per GB, as well as the redundancy.
- đ¸S3 pricing depends on storage, requests, and transfer.
- For transfer, putting data into AWS is free, but youâll pay on the way out. Transfer from S3 to EC2 in the same region is free. Transfer to other regions or the Internet in general is not free.
- Deletes are free.
- S3 Reduced Redundancy and Infrequent Access: Most people use the Standard storage class in S3, but there are other storage classes with lower cost:
- Reduced Redundancy Storage (RRS) has lower durability (99.99%, so just four nines). That is, thereâs a small chance youâll lose data. For some data sets where data has value in a statistical way (losing say half a percent of your objects isnât a big deal) this is a reasonable trade-off.
- Infrequent Access (IA) lets you get cheaper storage in exchange for more expensive access. This is great for archives like logs you already processed, but might want to look at later. To get an idea of the cost savings when using Infrequent Access (IA), you can use this S3 Infrequent Access Calculator.
- Glacier is a third alternative discussed as a separate product.
- See the comparison table.
- âąPerformance: Maximizing S3 performance means improving overall throughput in terms of bandwidth and number of operations per second.
- S3 is highly scalable, so in principle you can get arbitrarily high throughput. (A good example of this is S3DistCp.)
- But usually you are constrained by the pipe between the source and S3 and/or the level of concurrency of operations.
- Throughput is of course highest from within AWS to S3, and between EC2 instances and S3 buckets that are in the same region.
- Bandwidth from EC2 depends on instance type. See the âNetwork Performanceâ column at ec2instances.info.
- Throughput of many objects is extremely high when data is accessed in a distributed way, from many EC2 instances. Itâs possible to read or write objects from S3 from hundreds or thousands of instances at once.
- However, throughput is very limited when objects are accessed sequentially from a single instance. Individual operations take many milliseconds, and bandwidth to and from instances is limited.
- Therefore, to perform large numbers of operations, itâs necessary to use multiple worker threads and connections on individual instances, and for larger jobs, multiple EC2 instances as well.
- Multi-part uploads: For large objects you want to take advantage of the multi-part uploading capabilities (starting with minimum chunk sizes of 5 MB).
- Large downloads: Also you can download chunks of a single large object in parallel by exploiting the HTTP GET range-header capability.
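Sketching the range computation (the helper is ours; each returned string is a valid HTTP Range header value, of the kind S3 GET requests accept, with ranges inclusive on both ends):

```python
def byte_ranges(object_size, chunk_size=8 * 1024 * 1024):
    """Split an object of `object_size` bytes into HTTP Range header
    values suitable for fetching the chunks in parallel."""
    return ["bytes=%d-%d" % (start, min(start + chunk_size, object_size) - 1)
            for start in range(0, object_size, chunk_size)]
```

Each range can then be fetched by a separate worker thread or process and the chunks reassembled in order.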
- đ¸List pagination: Listing contents happens at 1000 responses per request, so for buckets with many millions of objects listings will take time.
- âKey prefixes: In addition, latency on operations is highly dependent on prefix similarities among key names. If you have need for high volumes of operations, it is essential to consider naming schemes with more randomness early in the key name (first 6 or 8 characters) in order to avoid "hot spots".
- We list this as a major gotcha since itâs often painful to do large-scale renames.
- đ¸Note that sadly, the advice about random key names goes against having a consistent layout with common prefixes to manage data lifecycles in an automated way.
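One common scheme is to prefix each key with a few hex characters of a hash of the key itself, so the prefix looks random to S3's partitioning but is still derivable from the original name (a sketch; the helper name is ours):

```python
import hashlib

def hashed_key(key, prefix_len=6):
    """Prefix a key with hex chars of its own MD5 so that keys spread
    across S3 index partitions instead of forming hot spots.

    Deterministic: the full stored key can be re-derived from the
    original key alone."""
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return "%s/%s" % (digest[:prefix_len], key)
```

Note this is exactly the trade-off the bullet above describes: the hash prefix defeats simple prefix-based lifecycle rules.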
- For data outside AWS, DirectConnect and S3 Transfer Acceleration can help. For S3 Transfer Acceleration, you pay about the equivalent of 1-2 months of storage for the transfer in either direction for using nearer endpoints.
- Command-line applications: There are a few ways to use S3 from the command line:
- Originally, s3cmd was the best tool for the job. Itâs still used heavily by many.
- The regular aws command-line interface now supports S3 well, and is useful for most situations.
- s4cmd is a replacement, with greater emphasis on performance via multi-threading, which is helpful for large files and large sets of files, and also offers Unix-like globbing support.
- GUI applications: You may prefer a GUI, or wish to support GUI access for less technical users. Some options:
- The AWS Console does offer a graphical way to use S3. Use caution telling non-technical people to use it, however, since without tight permissions, it offers access to many other AWS features.
- Transmit is a good option on OS X.
- S3 and CloudFront: S3 is tightly integrated with the CloudFront CDN. See the CloudFront section for more information, as well as S3 transfer acceleration.
- Static website hosting:
- S3 has a static website hosting option that is simply a setting that enables configurable HTTP index and error pages and HTTP redirect support to public content in S3. Itâs a simple way to host static assets or a fully static website.
- Consider using CloudFront in front of most or all assets:
- Like any CDN, CloudFront improves performance significantly.
- đ¸SSL is only supported on the built-in amazonaws.com domain for S3. S3 supports serving these sites through a custom domain, but not over SSL on a custom domain. However, CloudFront allows you to serve a custom domain over https. Amazon provides free SNI SSL/TLS certificates via AWS Certificate Manager. SNI does not work on very outdated browsers/operating systems. Alternatively, you can provide your own certificate to use on CloudFront to support all browsers/operating systems.
- đ¸If you are including resources across domains, such as fonts inside CSS files, you may need to configure CORS for the bucket serving those resources.
- Since pretty much everything is moving to SSL nowadays, and you likely want control over the domain, you probably want to set up CloudFront with your own certificate in front of S3 (and to ignore the AWS example on this as it is non-SSL only).
- That said, if you do, youâll need to think through invalidation or updates on CloudFront. You may wish to include versions or hashes in filenames so invalidation is not necessary.
- Permissions:
- đ¸Itâs important to manage permissions sensibly on S3 if you have data sensitivities, as fixing this later can be a difficult task if you have a lot of assets and internal users.
- đšDo create new buckets if you have different data sensitivities, as this is much less error prone than complex permissions rules.
- đšIf data is for administrators only, like log data, put it in a bucket that only administrators can access.
- đ¸Limit individual user (or IAM role) access to S3 to the minimal required and catalog the âapprovedâ locations. Otherwise, S3 tends to become the dumping ground where people put data to random locations that are not cleaned up for years, costing you big bucks.
- Data lifecycles:
- When managing data, understanding its lifecycle is as important as understanding the data itself. When putting data into a bucket, think about its lifecycle: its end of life, not just its beginning.
- đšIn general, data with different expiration policies should be stored under separate prefixes at the top level. For example, some voluminous logs might need to be deleted automatically monthly, while other data is critical and should never be deleted. Having the former in a separate bucket or at least a separate folder is wise.
- đ¸Thinking about this up front will save you pain. Itâs very hard to clean up large collections of files created by many engineers with varying lifecycles and no coherent organization.
- Alternatively you can set a lifecycle policy to archive old data to Glacier. Be careful with archiving large numbers of small objects to Glacier, since it may actually cost more.
- There is also a storage class called Infrequent Access that has the same durability as Standard S3, but is discounted per GB. It is suitable for objects that are infrequently accessed.
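A sketch of building such a policy (simplified; real rules support more fields, and boto3's put_bucket_lifecycle_configuration accepts a dict of this general shape):

```python
def lifecycle_rules(expire_after_days=None, glacier_after_days=None,
                    prefix=""):
    """Build a minimal S3 lifecycle configuration dict: optionally
    transition objects under `prefix` to Glacier after N days, and/or
    delete them after M days."""
    rule = {"ID": "auto-lifecycle", "Prefix": prefix, "Status": "Enabled"}
    if glacier_after_days is not None:
        rule["Transitions"] = [{"Days": glacier_after_days,
                                "StorageClass": "GLACIER"}]
    if expire_after_days is not None:
        rule["Expiration"] = {"Days": expire_after_days}
    return {"Rules": [rule]}
```

Keeping data with different policies under separate prefixes, as recommended above, is what makes a simple rule like this possible.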
- Data consistency: Understanding data consistency is critical for any use of S3 where there are multiple producers and consumers of data.
- Creation of individual objects in S3 is atomic. Youâll never upload a file and have another client see only half the file.
- Also, if you create a new object, youâll be able to read it instantly, which is called read-after-write consistency.
- Well, with the additional caveat that if you do a read on an object before it exists, then create it, you get eventual consistency (not read-after-write).
- If you overwrite or delete an object, you're only guaranteed eventual consistency.
- đšNote that until 2015, the 'us-standard' region had a weaker eventual consistency model, and the other (newer) regions were read-after-write. This was finally corrected, but watch for many old blogs mentioning this!
- In practice, âeventual consistencyâ usually means within seconds, but expect rare cases of minutes or hours.
- S3 as a filesystem:
- In general S3âs APIs have inherent limitations that make S3 hard to use directly as a POSIX-style filesystem while still preserving S3âs own object format. For example, appending to a file requires rewriting, which cripples performance, and atomic rename of directories, mutual exclusion on opening files, and hardlinks are impossible.
- s3fs is a FUSE filesystem that goes ahead and tries anyway, but it has performance limitations and surprises for these reasons.
- Riofs (C) and Goofys (Go) are more recent efforts that adopt a different data storage format to address those issues, and so are likely improvements on s3fs.
- S3QL (discussion) is a Python implementation that offers data de-duplication, snap-shotting, and encryption, but only one client at a time.
- ObjectiveFS (discussion) is a commercial solution that supports filesystem features and concurrent clients.
- If you are primarily using a VPC, consider setting up a VPC Endpoint for S3 in order to allow your VPC-hosted resources to easily access it without the need for extra network configuration or hops.
- Cross-region replication: S3 has a feature for replicating a bucket between one region and another. Note that S3 is already highly replicated within one region, so usually this isn't necessary for durability, but it could be useful for compliance (geographically distributed data storage), lower latency, or as a strategy to reduce region-to-region bandwidth costs by mirroring heavily used data in a second region.
- IPv4 vs IPv6: For a long time S3 only supported IPv4 at the default endpoint `https://BUCKET.s3.amazonaws.com`. However, as of Aug 11, 2016 it now supports both IPv4 and IPv6! To use both, you have to enable dualstack either in your preferred API client or by directly using the URL scheme `https://BUCKET.s3.dualstack.REGION.amazonaws.com`.
- đ¸For many years, there was a notorious 100-bucket limit per account, which could not be raised and caused many companies significant pain. As of 2015, you can request an increase to the limit, but it will still be capped (generally below ~1000 per account).
- đ¸Be careful not to make implicit assumptions about transactionality or sequencing of updates to objects. Never assume that if you modify a sequence of objects, the clients will see the same modifications in the same sequence, or if you upload a whole bunch of files, that they will all appear at once to all clients.
- đ¸S3 has an SLA with 99.9% uptime. If you use S3 heavily, you'll inevitably see occasional errors accessing or storing data as disks or other infrastructure fail. Availability is usually restored in seconds or minutes. Although availability is not extremely high, as mentioned above, durability is excellent.
- đ¸After uploading, any change that you make to the object causes a full rewrite of the object, so avoid appending-like behavior with regular files.
- đ¸Eventual data consistency, as discussed above, can be surprising sometimes. If S3 suffers internal replication issues, an object may be visible only from a subset of the machines, depending on which S3 endpoint they hit. Those issues usually resolve within seconds; however, we've seen isolated cases where they lingered for 20-30 hours.
- đ¸MD5s and multi-part uploads: In S3, the ETag header is a hash of the object, and in many cases it is the MD5 hash. However, this is not the case in general when you use multi-part uploads. One workaround is to compute MD5s yourself and put them in a custom header (such as is done by s4cmd).
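For reference, the community-documented algorithm behind multi-part ETags is the MD5 of the concatenated per-part MD5 digests, suffixed with the part count. A sketch you can use to verify downloads locally (this is observed behavior, not an official S3 guarantee):

```python
import hashlib

def multipart_etag(data, part_size):
    """Compute the ETag S3 assigns to a multi-part upload of `data`
    split into `part_size`-byte chunks. Community-documented algorithm,
    not an official API guarantee."""
    parts = [data[i:i + part_size] for i in range(0, len(data), part_size)]
    if len(parts) <= 1:
        # Single-part uploads get a plain MD5 of the object.
        return hashlib.md5(data).hexdigest()
    # Multi-part: MD5 over the concatenation of each part's raw MD5 digest,
    # suffixed with "-<number of parts>".
    digest = hashlib.md5(b"".join(hashlib.md5(p).digest() for p in parts))
    return "{}-{}".format(digest.hexdigest(), len(parts))
```

Note the result depends on the part size used at upload time, which is one reason computing your own MD5 in a custom header is the more robust approach.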
- đ¸US Standard region: Most S3 endpoints match the region theyâre in, with the exception of the us-east-1 region, which is called 'us-standard' in S3 terminology. This region is also the only region that is replicated across coasts. As a result, latency varies more in this region than in others. You can minimize latency from us-east-1 by using s3-external-1.amazonaws.com.
As an illustration of comparative features and price, the table below compares S3 Standard, RRS, and IA with Glacier, EBS, and EFS, using the Virginia region as of August 2016.
Durability (per year) | Availability âdesignedâ | Availability SLA | Storage (per TB per month) | GET or retrieve (per million) | Write or archive (per million) | |
---|---|---|---|---|---|---|
Glacier | Eleven 9s | Sloooow | â | $7 | $50 | $50 |
S3 IA | Eleven 9s | 99.9% | 99% | $12.50 | $1 | $10 |
S3 RRS | 99.99% | 99.99% | 99.9% | $24 | $0.40 | $5 |
S3 Standard | Eleven 9s | 99.99% | 99.9% | $30 | $0.40 | $5 |
EBS | 99.8% | Unstated | 99.95% | $25/$45/$100/$125+ (sc1/st1/gp2/io1) | ||
EFS | âHighâ | âHighâ | â | $300 |
Especially notable items are in boldface. Sources: S3 pricing, S3 SLA, S3 FAQ, RRS info, Glacier pricing, EBS availability and durability, EBS pricing, EFS pricing, EC2 SLA
- đ Homepage â Documentation â FAQ â Pricing (see also ec2instances.info)
- EC2 (Elastic Compute Cloud) is AWSâ offering of the most fundamental piece of cloud computing: A virtual private server. These âinstancesâ can run most Linux, BSD, and Windows operating systems. Internally, they use Xen virtualization.
- The term âEC2â is sometimes used to refer to the servers themselves, but technically refers more broadly to a whole collection of supporting services, too, like load balancing (CLBs/ALBs), IP addresses (EIPs), bootable images (AMIs), security groups, and network drives (EBS) (which we discuss individually in this guide).
- Running EC2 is akin to running a set of physical servers, as long as you donât do automatic scaling or tooled cluster setup. If you just run a set of static instances, migrating to another VPS or dedicated server provider should not be too hard.
- đŞAlternatives to EC2: The direct alternatives are Google Cloud, Microsoft Azure, Rackspace, DigitalOcean and other VPS providers, some of which offer similar APIs for setting up and removing instances. (See the comparisons above.)
- Should you use Amazon Linux? AWS encourages use of their own Amazon Linux, which is evolved from Red Hat Enterprise Linux (RHEL) and CentOS. Itâs used by many, but others are skeptical. Whatever you do, think this decision through carefully. Itâs true Amazon Linux is heavily tested and better supported in the unlikely event you have deeper issues with OS and virtualization on EC2. But in general, many companies do just fine using a standard, non-Amazon Linux distribution, such as Ubuntu or CentOS. Using a standard Linux distribution means you have an exactly replicable environment should you use another hosting provider instead of (or in addition to) AWS. Itâs also helpful if you wish to test deployments on local developer machines running the same standard Linux distribution (a practice thatâs getting more common with Docker, too, and not currently possible with Amazon Linux).
- EC2 costs: See the section on this.
- đšPicking regions: When you first set up, consider which regions you want to use first. Many people in North America just automatically set up in the us-east-1 (N. Virginia) region, which is the default, but itâs worth considering if this is best up front. For example, you might find it preferable to start in us-west-1 (N. California) or us-west-2 (Oregon) if youâre in California and latency matters. Some services are not available in all regions. Baseline costs also vary by region, up to 10-30% (generally lowest in us-east-1).
- Instance types: EC2 instances come in many types, corresponding to the capabilities of the virtual machine in CPU architecture and speed, RAM, disk sizes and types (SSD or magnetic), and network bandwidth.
- Selecting instance types is complex since there are so many types. Additionally, there are different generations, released over the years.
- đšUse the list at ec2instances.info to review costs and features. Amazonâs own list of instance types is hard to use, and doesnât list features and price together, which makes it doubly difficult.
- Prices vary a lot, so use ec2instances.info to determine the set of machines that meet your needs and ec2price.com to find the cheapest type in the region youâre working in. Depending on the timing and region, it might be much cheaper to rent an instance with more memory or CPU than the bare minimum.
- Dedicated instances and dedicated hosts are assigned hardware, instead of usual virtual instances. They are more expensive than virtual instances but can be preferable for performance, compliance, or licensing reasons.
- 32 bit vs 64 bit: Youâll be using 64-bit EC2 (âamd64â) instances nowadays, though a few micro, small, and medium instance types are still available as 32-bit (âi386â) architecture. Use 64 bit unless you have legacy constraints or other good reasons to use 32.
- HVM vs PV: There are two kinds of virtualization technology used by EC2, hardware virtual machine (HVM) and paravirtual (PV). Historically, PV was the usual type, but now HVM is becoming the standard. If you want to use the newest instance types, you must use HVM. See the instance type matrix for details.
- Operating system: To use EC2, youâll need to pick a base operating system. It can be Windows or Linux, such as Ubuntu or Amazon Linux. You do this with AMIs, which are covered in more detail in their own section below.
- Limits: You canât create arbitrary numbers of instances. Default limits on numbers of EC2 instances per account vary by instance type, as described in this list.
- âUse termination protection: For any instances that are important and long-lived (in particular, those that arenât part of auto scaling), enable termination protection. This is an important line of defense against user mistakes, such as accidentally terminating many instances instead of just one due to human error.
- SSH key management:
- When you start an instance, you need to have at least one ssh key pair set up, to bootstrap, i.e., allow you to ssh in the first time.
- Aside from bootstrapping, you should manage keys yourself on the instances, assigning individual keys to individual users or services as appropriate.
- Avoid reusing the original boot keys except by administrators when creating new instances.
- See guides on how to avoid sharing keys and how to add individual ssh keys for individual users.
- GPU support: You can rent GPU-enabled instances on EC2. There are two instance families, each sporting an NVIDIA card: g2 instances have a GRID K520 (1,536 CUDA cores) and the older cg1 instances a Tesla M2050 (448 CUDA cores).
- âNever use ssh passwords. Just donât do it; they are too insecure, and consequences of compromise too severe. Use keys instead. Read up on this and fully disable ssh password access to your ssh server by making sure 'PasswordAuthentication no' is in your /etc/ssh/sshd_config file. If youâre careful about managing ssh private keys everywhere they are stored, it is a major improvement on security over password-based authentication.
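As a sanity check, you can scan sshd_config for the effective setting. A minimal sketch (real sshd parsing has more rules, such as Match blocks and included files, so treat this as illustrative only):

```python
def password_auth_disabled(sshd_config_text):
    """Return True if the config explicitly sets 'PasswordAuthentication no'.
    Like sshd, the first occurrence of a keyword wins."""
    for line in sshd_config_text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and whitespace
        if not line:
            continue
        parts = line.split(None, 1)
        if parts[0].lower() == "passwordauthentication":
            return len(parts) > 1 and parts[1].strip().lower() == "no"
    # Not set at all: sshd's compiled-in default is 'yes', so treat as enabled.
    return False
```

You could run a check like this from configuration management to catch instances that drift back to password authentication.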
- đ¸For all newer instance types, when selecting the AMI to use, be sure you select the HVM AMI, or it just wonât work.
- âWhen creating an instance and using a new ssh key pair, make sure the ssh key permissions are correct.
- đ¸Sometimes certain EC2 instances can get scheduled for retirement by AWS due to âdetected degradation of the underlying hardware,â in which case you are given a couple of weeks to migrate to a new instance.
- đ¸Periodically you may find that your server or load balancer is receiving traffic intended for (presumably) a previous EC2 server that was running at the IP address you have now been handed (this may not matter, or it can be fixed by migrating to another new instance).
- âIf the EC2 API itself is a critical dependency of your infrastructure (e.g. for automated server replacement, custom scaling algorithms, etc.) and you are running at a large scale or making many EC2 API calls, make sure that you understand when they might fail (calls to it are rate limited and the limits are not published and subject to change) and code and test against that possibility.
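A common defensive pattern for throttled API calls is capped exponential backoff with jitter. A minimal sketch (the helper name and defaults here are our own, not an AWS API; real code should catch the specific throttling error rather than all exceptions):

```python
import random
import time

def call_with_backoff(fn, max_attempts=5, base_delay=0.5, sleep=time.sleep):
    """Retry `fn` with capped exponential backoff and full jitter.
    `fn` should raise on a throttling error (e.g. RequestLimitExceeded)."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts; surface the error to the caller
            # Full jitter: sleep a random duration up to the capped backoff.
            delay = random.uniform(0, min(30.0, base_delay * (2 ** attempt)))
            sleep(delay)
```

The jitter matters: if a fleet of workers all back off on the same schedule, their retries arrive in synchronized waves and stay throttled.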
- âMany newer EC2 instance types are EBS-only. Make sure to factor in EBS performance and costs when planning to use them.
- đ User guide
- AMIs (Amazon Machine Images) are immutable images that are used to launch preconfigured EC2 instances. They come in both public and private flavors. Access to public AMIs is either freely available (shared/community AMIs) or bought and sold in the AWS Marketplace.
- Many operating system vendors publish ready-to-use base AMIs. For Ubuntu, see the Ubuntu AMI Finder. Amazon of course has AMIs for Amazon Linux.
- AMIs are built independently based on how they will be deployed. You must select AMIs that match your deployment when using them or creating them:
- EBS or instance store
- PV or HVM virtualization types
- 32 bit (âi386â) vs 64 bit (âamd64â) architecture
- As discussed above, modern deployments will usually be with 64-bit EBS-backed HVM.
- You can create your own custom AMI by snapshotting the state of an EC2 instance that you have modified.
- AMIs backed by EBS storage have the necessary image data loaded into the EBS volume itself and donât require an extra pull from S3, which results in EBS-backed instances coming up much faster than instance storage-backed ones.
- AMIs are per region, so you must look up AMIs in your region, or copy your AMIs between regions with the AMI Copy feature.
- As with other AWS resources, itâs wise to use tags to version AMIs and manage their lifecycle.
- If you create your own AMIs, there is always some tension in choosing how much installation and configuration you want to âbakeâ into them.
- Baking less into your AMIs (for example, just a configuration management client that downloads, installs, and configures software on new EC2 instances when they are launched) allows you to minimize time spent automating AMI creation and managing the AMI lifecycle (you will likely be able to use fewer AMIs and will probably not need to update them as frequently), but results in longer waits before new instances are ready for use and results in a higher chance of launch-time installation or configuration failures.
- Baking more into your AMIs (for example, pre-installing but not fully configuring common software along with a configuration management client that loads configuration settings at launch time) results in a faster launch time and fewer opportunities for your software installation and configuration to break at instance launch time but increases the need for you to create and manage a robust AMI creation pipeline.
- Baking even more into your AMIs (for example, installing all required software as well as, potentially, environment-specific configuration information) results in fast launch times and a much lower chance of instance launch-time failures but (without additional re-deployment and re-configuration considerations) can require time consuming AMI updates in order to update software or configuration as well as more complex AMI creation automation processes.
- Which option you favor depends on how quickly you need to scale up capacity, and size and maturity of your team and product.
- When instances boot fast, auto-scaled services require less spare capacity built in and can more quickly scale up in response to sudden increases in load. When setting up a service with autoscaling, consider baking more into your AMIs and backing them with the EBS storage option.
- As systems become larger, it is common to have more complex AMI management, such as a multi-stage AMI creation process in which a few (ideally one) common base AMIs are infrequently regenerated when components that are common to all deployed services are updated and then a more frequently run âservice-levelâ AMI generation process that includes installation and possibly configuration of application-specific software.
- More thinking on AMI creation strategies here.
- Use tools like Packer to simplify and automate AMI creation.
- By default, instances based on Amazon Linux AMIs are configured to point to 'latest' versions of packages in Amazonâs package repository. This means that the package versions that get installed are not locked and it is possible for changes, including breaking ones, to appear when applying updates in the future. If you bake your AMIs with updates already applied, this is unlikely to cause problems in running services whose instances are based on those AMIs â breaks will appear at the earlier AMI-baking stage of your build process, and will need to be fixed or worked around before new AMIs can be generated. There is a âlock on launchâ feature that allows you to configure Amazon Linux instances to target the repository of a particular major version of the Amazon Linux AMI, reducing the likelihood that breaks caused by Amazon-initiated package version changes will occur at package install time but at the cost of not having updated packages get automatically installed by future update runs. Pairing use of the âlock on launchâ feature with a process to advance the Amazon Linux AMI at your discretion can give you tighter control over update behaviors and timings.
- đ Homepage â User guide â FAQ â Pricing (no additional charge)
- Auto Scaling Groups (ASGs) are used to control the number of instances in a service, reducing manual effort to provision or deprovision EC2 instances.
- They can be configured, through âScaling Policies,â to automatically increase or decrease instance counts based on metrics like CPU utilization, or based on a schedule.
- There are three common ways of using ASGs: dynamic (automatically adjust instance count based on metrics like CPU utilization), static (maintain a specific instance count at all times), and scheduled (maintain different instance counts at different times of day or days of the week).
- đ¸ASGs have no additional charge themselves; you pay for underlying EC2 and CloudWatch services.
- đ¸ Better matching your cluster size to your current resource requirements through use of ASGs can result in significant cost savings for many types of workloads.
- Pairing ASGs with CLBs is a common pattern used to deal with changes in the amount of traffic a service receives.
- Dynamic Auto Scaling is easiest to use with stateless, horizontally scalable services.
- Even if you are not using ASGs to dynamically increase or decrease instance counts, you should seriously consider maintaining all instances inside of ASGs â given a target instance count, the ASG will work to ensure that number of instances running is equal to that target, replacing instances for you if they die or are marked as being unhealthy. This results in consistent capacity and better stability for your service.
- Autoscalers can be configured to terminate instances that a CLB or ALB has marked as being unhealthy.
- By default, ASGs will kill instances that the EC2 instance manager considers to be unresponsive. It is possible for instances whose CPU is completely saturated for minutes at a time to appear to be unresponsive, causing an ASG with the default 'ReplaceUnhealthy' setting turned on to replace them. When instances that are managed by ASGs are expected to consistently run with very high CPU, consider deactivating this setting. If you do so, however, detecting and killing unhealthy nodes will become your responsibility.
- đ Homepage â User guide â FAQ â Pricing
- EBS (Elastic Block Store) provides block level storage. That is, it offers storage volumes that can be attached as filesystems, like traditional network drives.
- EBS volumes can only be attached to one EC2 instance at a time. In contrast, EFS can be shared but has a much higher price point (a comparison).
- âąRAID: Use RAID drives for increased performance.
- âąA worthy read is AWSâ post on EBS IO characteristics as well as their performance tips.
- âąOne can provision IOPS (that is, pay for a specific level of I/O operations per second) to ensure a particular level of performance for a disk.
- âąA single EBS volume allows 10k IOPS max. To get the maximum performance out of an EBS volume, it has to be of sufficient size and attached to an EBS-optimized EC2 instance.
- A standard block size for an EBS volume is 16KiB.
- âEBS durability is reasonably good for a regular hardware drive (annual failure rate of between 0.1% - 0.2%). On the other hand, that is very poor if you donât have backups! By contrast, S3 durability is extremely high. If you care about your data, back it up to S3 with snapshots.
- đ¸EBS has an SLA with 99.95% uptime. See notes on high availability below.
- âEBS volumes have a volume type indicating the physical storage type. The type called âstandardâ is the old magnetic spinning-platter disk, which delivers only hundreds of IOPS â not what you want unless youâre really trying to cut costs. The newer st1 and sc1 types are also HDD-based, optimized for large sequential throughput rather than IOPS. Modern SSD-based gp2 or io1 are typically the options you want for general use.
- đ Homepage â User guide â FAQ â Pricing
- đĽEFS is Amazonâs new (general release 2016) network filesystem.
- It is similar to EBS in that it is a network-attached drive, but it differs in important ways:
- EFS can be attached to many instances (up to thousands), while an EBS volume can only be attached to one instance. It does this via NFSv4.
- EFS can offer higher throughput (multiple gigabytes per second) and better durability and availability than EBS (see the comparison table), but with higher latency.
- EFS cannot be used as a boot volume or in certain other ways EBS can.
- EFS costs much more than EBS (up to three times as much).
đ§ Please help expand this incomplete section.
- AWS has two load balancing products: âClassic Load Balancersâ (CLBs) and âApplication Load Balancersâ (ALBs).
- Before the introduction of ALBs, âClassic Load Balancersâ were known as âElastic Load Balancersâ (ELBs), so older documentation, tooling, and blog posts may still reference âELBsâ.
- CLBs have been around since 2009 while ALBs are a recent (2016) addition to AWS.
- CLBs support TCP and HTTP load balancing while ALBs support HTTP load balancing only.
- Both can optionally handle termination for a single SSL certificate.
- Both can optionally perform active health checks of instances and remove them from the destination pool if they become unhealthy.
- CLBs don't support complex / rule-based routing, while ALBs support a (currently small) set of rule-based routing features.
- CLBs can only forward traffic to a single globally configured port on destination instances, while ALBs can forward to ports that are configured on a per-instance basis, better supporting routing to services on shared clusters with dynamic port assignment (like ECS or Mesos).
- CLBs are supported in EC2 Classic as well as in VPCs while ALBs are supported in VPCs only.
- If you donât have opinions on your load balancing up front, and donât have complex load balancing needs like application-specific routing of requests, itâs reasonable just to use a CLB or ALB for load balancing.
- Even if you donât want to think about load balancing at all, because your architecture is so simple (say, just one server), put a load balancer in front of it anyway. This gives you more flexibility when upgrading, since you wonât have to change any DNS settings that will be slow to propagate, and also it lets you do a few things like terminate SSL more easily.
- CLBs and ALBs have many IPs: Internally, an AWS load balancer is simply a collection of individual software load balancers hosted within EC2, with DNS load balancing traffic among them. The pool can contain many IPs, at least one per availability zone, and depending on traffic levels. They also support SSL termination, which is very convenient.
- Scaling: CLBs and ALBs can scale to very high throughput, but scaling up is not instantaneous. If youâre expecting to be hit with a lot of traffic suddenly, it can make sense to load test them so they scale up in advance. You can also contact Amazon and have them âpre-warmâ the load balancer.
- Client IPs: In general, if servers want to know true client IP addresses, load balancers must forward this information somehow. CLBs add the standard X-Forwarded-For header. When using a CLB as an HTTP load balancer, itâs possible to get the clientâs IP address from this.
- Using load balancers when deploying: One common pattern is to swap instances in the load balancer after spinning up a new stack with your latest version, keep the old stack running for one or two hours, and either flip back to the old stack in case of problems or tear it down.
- âCLBs and ALBs have no fixed external IP that all clients see. For most consumer apps this doesnât matter, but enterprise customers of yours may want this. IPs will be different for each user, and will vary unpredictably for a single client over time (within the standard EC2 IP ranges). And similarly, never resolve a CLB name to an IP and put it as the value of an A record â it will work for a while, then break!
- âSome web clients or reverse proxies cache DNS lookups for a long time, which is problematic for CLBs and ALBs, since they change their IPs. This means after a few minutes, hours, or days, your client will stop working, unless you disable DNS caching. Watch out for Javaâs settings and be sure to adjust them properly. Another example is nginx as a reverse proxy, which resolves backends only at start-up.
- âItâs not unheard of for IPs to be recycled between customers without a long cool-off period. So as a client, if you cache an IP and are not using SSL (to verify the server), you might get not just errors, but responses from completely different services or companies!
- đ¸As an operator of a service behind a CLB or ALB, the latter phenomenon means you can also see puzzling or erroneous requests by clients of other companies. This is most common with clients using back-end APIs (since web browsers typically cache for a limited period).
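The safe client behavior behind all of the above is to re-resolve the load balancerâs DNS name at connection time, honoring its short TTL, rather than pinning an IP. A minimal sketch:

```python
import socket

def resolve_fresh(hostname, port=443):
    """Resolve a load balancer hostname at call time instead of caching
    the result indefinitely; CLB/ALB IPs rotate, so long-lived caches go
    stale (and may even point at someone else's service)."""
    infos = socket.getaddrinfo(hostname, port, proto=socket.IPPROTO_TCP)
    # Collect the unique addresses; DNS load-balances across the pool.
    return sorted({info[4][0] for info in infos})
```

In managed runtimes, check the platform's caching knobs instead: for example, the JVM caches successful lookups according to `networkaddress.cache.ttl`, which should be set to a small value when talking to AWS load balancers.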
- âCLBs and ALBs take time to scale up and do not handle sudden spikes in traffic well. Therefore, if you anticipate a spike, you need to âpre-warmâ the load balancer by gradually sending an increasing amount of traffic.
- âTune your healthchecks carefully â if you are too aggressive about deciding when to remove an instance and conservative about adding it back into the pool, the service that your load balancer is fronting may become inaccessible for seconds or minutes at a time. Be extra careful about this when an autoscaler is configured to terminate instances that are marked as being unhealthy by a managed load balancer.
- đ Homepage â User guide â FAQ â Pricing
- Classic Load Balancers, formerly known as Elastic Load Balancers, are HTTP and TCP load balancers that are managed and scaled for you by Amazon.
- Best practices: This article is a must-read if you use CLBs heavily, and has a lot more detail.
- In general, CLBs are not as âsmartâ as some load balancers, and donât have fancy features or fine-grained control a traditional hardware load balancer would offer. For most common cases involving sessionless apps or cookie-based sessions over HTTP, or SSL termination, they work well.
- Complex rules for directing traffic are not supported. For example, you canât direct traffic based on a regular expression in the URL, like HAProxy offers.
- Apex DNS names: Once upon a time, you couldnât assign a CLB to an apex DNS record (i.e. example.com instead of foo.example.com) because it needed to be an A record instead of a CNAME. This is now possible with a Route 53 alias record directly pointing to the load balancer.
- đ¸CLBs use HTTP keep-alives on the internal side. This can cause an unexpected side effect: Requests from different clients, each in their own TCP connection on the external side, can end up on the same TCP connection on the internal side. Never assume that multiple requests on the same TCP connection are from the same client!
- đ Homepage â User guide â FAQ â Pricing
- đĽWebsockets and HTTP/2 are now supported.
- Prior to the Application Load Balancer, you were advised to use TCP instead of HTTP as the protocol to make Websockets work (as described here) and use the obscure but useful Proxy Protocol (more on this) to pass client IPs over a TCP load balancer.
- Use ALBs to route to services that are hosted on shared clusters with dynamic port assignment (like ECS or Mesos).
- ALBs support HTTP path-based routing (send HTTP requests for â/api/â -> {target-group-1}, â/blog/â -> {target group 2}).
- ALBs support HTTP routing but not port-based TCP routing.
- ALBs do not (yet) support routing based on HTTP âHostâ header or HTTP verb.
- Instances in the ALB's target groups have to either have a single, fixed healthcheck port (âEC2 instanceâ-level healthcheck) or the healthcheck port for a target has to be the same as its application port (âApplication instanceâ-level healthcheck) - you can't configure a per-target healthcheck port that is different than the application port.
- ALBs are VPC-only (they are not available in EC2 Classic)
- In a target group, if there is no healthy target, all requests are routed to all targets. For example, if you point a listener at a target group containing a single service that has a long initialization phase (during which the health checks would fail), requests will reach the service while it is still starting up.
- đ Documentation â FAQ â Pricing
- Elastic IPs are static IP addresses you can rent from AWS to assign to EC2 instances.
- đšPrefer load balancers to elastic IPs: For single-instance deployments, you could just assign an elastic IP to an instance, give that IP a DNS name, and consider that your deployment. Most of the time, you should provision a load balancer instead:
- Itâs easy to add and remove instances from load balancers. It's also quicker to add or remove instances from a load balancer than to reassign an elastic IP.
- Itâs more convenient to point DNS records to load balancers, instead of pointing them to specific IPs you manage manually. They can also be Route 53 aliases, which are easier to change and manage.
- But in some situations, you do need to manage and fix IP addresses of EC2 instances, for example if a customer needs a fixed IP. These situations require elastic IPs.
- Elastic IPs are limited to 5 per account. Itâs possible to request more.
- Elastic IPs are no extra charge as long as youâre using them, but incur a small hourly fee when not attached to an active resource, which is a mechanism to prevent people from squatting on excessive numbers of IP addresses.
- đ¸There is officially no way to allocate a contiguous block of IP addresses, something you may desire when giving IPs to external users. Though when allocating at once, you may get lucky and have some be part of the same CIDR block.
- đ Homepage â Developer guide â FAQ â Pricing
- Glacier is a lower-cost alternative to S3 when data is infrequently accessed, such as for archival purposes.
- Itâs only useful for data that is rarely accessed. It generally takes 3-5 hours to fulfill a retrieval request.
- AWS has not officially revealed the storage media used by Glacier; it may be low-spin hard drives or even tapes.
- You can physically ship your data to Amazon to put on Glacier on a USB or eSATA HDD.
- đ¸Getting files off Glacier is glacially slow (typically 3-5 hours or more).
- đ¸Due to a fixed overhead per file (you pay per PUT or GET operation), uploading and downloading many small files on/to Glacier might be very expensive. There is also a 32KiB storage overhead per file. Hence itâs a good idea to combine many small files into larger archives before upload.
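A back-of-the-envelope sketch of why small files hurt, using the roughly 32KiB per-archive storage overhead (the helper is ours, for illustration):

```python
def glacier_billable_bytes(file_sizes, per_file_overhead=32 * 1024):
    """Estimate billable Glacier storage: each archive carries roughly
    32KiB of index/metadata overhead, so many small files inflate costs."""
    return sum(size + per_file_overhead for size in file_sizes)
```

For example, a million 4KiB files carry 32GiB of overhead on top of 4GiB of actual data, an 8x markup; a single tar of the same data pays the overhead once.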
- đ¸Glacierâs pricing policy is reportedly pretty complicated: âGlacier data retrievals are priced based on the peak hourly retrieval capacity used within a calendar month.â Some more info can be found here and here.
- đ Homepage â User guide â FAQ â Pricing(see also ec2instances.info/rds/)
- RDS is a managed relational database service, allowing you to deploy and scale databases more easily. It supports Amazon Aurora, Oracle, Microsoft SQL Server, PostgreSQL, MySQL and MariaDB.
- If youâre looking for the managed convenience of RDS for MongoDB, this isnât offered by AWS directly, but you may wish to consider a provider such as mLab.
- MySQL RDS allows access to binary logs.
- đ¸MySQL vs MariaDB vs Aurora: If you prefer a MySQL-style database but are starting something new, you probably should consider Aurora and MariaDB as well. Aurora has increased availability and is the next-generation solution. That said, Aurora may not be as fast relative to MySQL as is sometimes reported, and is more complex to administer. MariaDB, the modern community fork of MySQL, likely now has the edge over MySQL for many purposes and is supported by RDS.
- đ¸Aurora: Aurora is based on MySQL 5.6. If you are planning to migrate to Aurora from an existing MySQL database, avoiding any MySQL features from 5.7 or later will ease the transition. The easiest migration path to Aurora is restoring a database snapshot from MySQL 5.6. The next easiest method is restoring a dump from a MySQL-compatible database such as MariaDB. If neither of those methods are options, Amazon offers a fee-based data migration service.
- âąRDS instances run on EBS volumes, and hence are constrained by the EBS performance.
- đ¸Verify what database features you need, as not everything you might want is available on RDS. For example, if you are using Postgres, check the list of supported features and extensions. If the features you need arenât supported by RDS, youâll have to deploy your database yourself.
- đ Homepage â Developer guide â FAQ â Pricing
- DynamoDB is a NoSQL database with focuses on speed, flexibility, and scalability.
- DynamoDB is priced on a combination of throughput and storage.
- â Unlike the technologies behind many other Amazon products, DynamoDB is a proprietary AWS product with no interface-compatible alternative available as an open source project. If you tightly couple your application to its API and feature set, it will take significant effort to replace.
- The most commonly used alternative to DynamoDB is Cassandra.
- There is a local version of DynamoDB provided for developer use.
- DynamoDB Streams provides an ordered stream of changes to a table. Use it to replicate, back up, or drive events off of data.
- DynamoDB can be used as a simple locking service.
- đ¸ DynamoDB doesnât provide an easy way to bulk-load data (it is possible through Data Pipeline), and this has some unfortunate consequences. Since you need to use the regular service APIs to update existing or create new rows, it is common to temporarily turn up a destination tableâs write throughput to speed import. But when the tableâs write capacity is increased, DynamoDB may do an irreversible split of the partitions underlying the table, spreading the total table capacity evenly across the new generation of partitions. Later, if the capacity is reduced, the capacity for each partition is also reduced but the total number of partitions is not, leaving less capacity for each partition. This leaves the table in a state where it is much easier for hotspots to overwhelm individual partitions.
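There is no official formula for the partition count, but a widely cited community heuristic is useful for reasoning about this gotcha (treat the constants as approximations, not a DynamoDB guarantee):

```python
import math

def estimated_partitions(read_capacity, write_capacity, size_gb):
    """Community heuristic (not an official AWS formula): partitions are
    the larger of the throughput-driven and size-driven counts, and each
    partition serves an equal share of the provisioned capacity."""
    by_throughput = math.ceil(read_capacity / 3000.0 + write_capacity / 1000.0)
    by_size = math.ceil(size_gb / 10.0)
    return max(by_throughput, by_size, 1)
```

For example, bumping a small table to 12,000 WCU for an import could split it into ~12 partitions; dropping back to 1,000 WCU afterwards leaves those 12 partitions in place, each serving only ~83 WCU.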
- It is important to make sure that DynamoDB resource limits are compatible with your dataset and workload. For example, the maximum size value that can be added to a DynamoDB table is 400 KB (larger items can be stored in S3 and a URL stored in DynamoDB).
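A common workaround for oversized values is the overflow-to-S3 pattern sketched below (the bucket name and item layout are made up for illustration):

```python
MAX_ITEM_BYTES = 400 * 1024  # DynamoDB's per-item size limit

def prepare_item(key, value_bytes):
    """Sketch of the 'overflow to S3' pattern: store oversized values in
    S3 and keep only a pointer to them in the DynamoDB item."""
    if len(value_bytes) <= MAX_ITEM_BYTES:
        return {"key": key, "value": value_bytes}
    # Too big for DynamoDB: the caller would upload value_bytes to S3
    # under this (hypothetical) key and store only the pointer.
    return {"key": key, "s3_pointer": "s3://my-overflow-bucket/{}".format(key)}
```

Note the 400KB limit counts attribute names as well as values, so in practice the threshold for overflow should be somewhat below the hard limit.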
- đ Homepage â Developer guide â FAQ â Pricing
- ECS (EC2 Container Service) is a relatively new service (launched end of 2014) that manages clusters of services deployed via Docker.
- See the Containers and AWS section for more context on containers.
- ECS is growing in adoption, especially for companies that embrace microservices.
- Deploying Docker directly in EC2 yourself is another common approach to using Docker on AWS. Using ECS is not required, and ECS does not (yet) seem to be the predominant way many companies are using Docker on AWS.
- Itâs also possible to use Elastic Beanstalk with Docker, which is reasonable if youâre already using Elastic Beanstalk.
- Using Docker may change the way your services are deployed within EC2 or Elastic Beanstalk, but it does not radically change how most other services are used.
- ECR (EC2 Container Registry) is Amazonâs managed Docker registry service. While simpler than running your own registry, it is missing some features that might be desired by some users:
- Doesnât support cross-region replication of images.
- If you want fast fleet-wide pulls of large images, youâll need to push your image into a region-local registry.
- Doesnât support custom domains / certificates.
- A container's health is monitored via CLB or ALB. Those can also be used to address a containerized service. When using an ALB you do not need to handle port contention (i.e. services exposing the same port on the same host) since an ALBâs target groups can be associated with ECS-based services directly.
- This blog from Convox (and commentary) lists a number of common challenges with ECS as of early 2016.
- Kubernetes: Extensive container platform. Available as a hosted solution on Google Cloud (https://cloud.google.com/container-engine/) and AWS (https://tectonic.com/).
- Nomad: Orchestrator/Scheduler, tightly integrated in the Hashicorp stack (Consul, Vault, etc).
đ§ Please help expand this incomplete section.
- đ Homepage â Developer guide â FAQ â Pricing
- Lambda is a relatively new service (launched at end of 2014) that offers a different type of compute abstraction: A user-defined function that can perform a small operation, where AWS manages provisioning and scheduling how it is run.
- What does âserverlessâ mean? This idea of using Lambda for application logic has grown to be called serverless since you don't explicitly manage any server instances, as you would with EC2. This term is a bit confusing since the functions themselves do of course run on servers managed by AWS. Serverless, Inc. also uses this word for the name of their company and their own open source framework, but the term is usually meant more generally.
- The release of Lambda and API Gateway in 2015 triggered a startlingly rapid adoption in 2016, with many people writing about serverless architectures in which many applications traditionally solved by managing EC2 servers can be built without explicitly managing servers at all.
- Frameworks: Several frameworks for building and managing serverless deployment are emerging.
- The Awesome Serverless list gives a good set of examples of the relatively new set of tools and frameworks around Lambda.
- The Serverless framework is a leading new approach designed to help group and manage Lambda functions. It's approaching version 1 (as of August 2016) and is popular among early adopters.
- đŞOther clouds offer similar services with different names, including Google Cloud Functions, Azure Functions, and IBM OpenWhisk.
- đ¸Lambda is a new technology. As of mid 2016, only a few companies are using it for large-scale production applications.
- đ¸Managing lots of Lambda functions is a workflow challenge, and tooling to manage Lambda deployments is still immature.
- đ¸AWSâ official workflow around managing function versioning and aliases is painful.
đ§ Please help expand this incomplete section.
- đ Homepage â Developer guide â FAQ â Pricing
- API Gateway provides a scalable, secured front-end for service APIs, and can work with Lambda, Elastic Beanstalk, or regular EC2 services.
- It allows âserverlessâ deployment of applications built with Lambda.
- đ¸Switching over deployments after upgrades can be tricky. There are no built-in mechanisms to have a single domain name migrate from one API gateway to another one. So it may be necessary to build an additional layer in front (even another API Gateway) to allow smooth migration from one deployment to another.
- đ¸API Gateway only supports encrypted (https) endpoints, and does not support unencrypted HTTP. (This is probably a good thing.)
- đ¸API Gateway endpoints are public â there is no mechanism to build private endpoints, e.g. for internal use.
đ§ Please help expand this incomplete section.
- đ Homepage â Developer guide â FAQ â Pricing
- Route 53 is AWSâ DNS service.
- Historically, AWS was slow to penetrate the DNS market (as it is often driven by perceived reliability and long-term vendor relationships) but Route 53 has matured and is becoming the standard option for many companies. Route 53 is cheap by historic DNS standards, as it has a fairly large global network with geographic DNS and other formerly âpremiumâ features. Itâs convenient if you are already using AWS.
- âGenerally you donât get locked into a DNS provider for simple use cases, but increasingly become tied in once you use specific features like geographic routing or Route 53âs alias records.
- đŞMany alternative DNS providers exist, ranging from long-standing premium brands like UltraDNS and Dyn to less well known, more modestly priced brands like DNSMadeEasy. Most DNS experts will tell you that the market is opaque enough that reliability and performance donât really correlate well with price.
- âąRoute 53 is usually somewhere in the middle of the pack on performance tests, e.g. the SolveDNS reports.
- đšKnow about Route 53âs âaliasâ records:
- Route 53 supports all the standard DNS record types, but note that alias resource record sets are not a standard part of DNS, but a specific Route 53 feature. (It's available from other DNS providers too, but each provider has a different name for it.)
- Aliases are like an internal name (a bit like a CNAME) that is resolved internally on the server side. For example, traditionally you could have a CNAME to the DNS name of a CLB or ALB, but itâs often better to make an alias to the same load balancer. The effect is the same, but in the latter case, externally, all a client sees is the target the record points to.
- It's often wise to use alias records as an alternative to CNAMEs, since they can be updated instantly with an API call, without worrying about DNS propagation.
- You can use them for CLBs/ALBs or any other resource where AWS supports it.
- Somewhat confusingly, you can have CNAME and A aliases, depending on the type of the target.
- Because aliases are extensions to regular DNS records, if exported, the output zone file will have additional non-standard âALIASâ lines in it.
- Take advantage of AWS Route 53 latency-based routing. This automatically directs users around the globe to whichever of the AWS regions you run in offers them the lowest latency.
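As a concrete sketch of the alias records described above, this is the shape of a change batch passed to the ChangeResourceRecordSets API to point a record at an ALB. The domain, zone ID, and DNS name below are placeholders, not real values:

```python
# Hypothetical UPSERT of an alias A record targeting an ALB. Note the
# HostedZoneId is the *load balancer's* canonical hosted zone, not the
# zone you are editing; both identifiers here are placeholders.
change_batch = {
    "Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "www.example.com.",
            "Type": "A",  # an "A alias," since the ALB resolves to IPv4
            "AliasTarget": {
                "HostedZoneId": "ZEXAMPLELBZONE",
                "DNSName": "my-alb-123456789.us-east-1.elb.amazonaws.com.",
                # Optionally let Route 53 consider target health in answers:
                "EvaluateTargetHealth": False,
            },
        },
    }]
}
```

Re-pointing the alias is a single API call that takes effect without waiting out a TTL.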
- đ Homepage â Developer guide â FAQ â Pricing (no additional charge)
- CloudFormation offers mechanisms to create and manage entire configurations of many types of AWS resources, using a JSON-based templating language.
- đ¸CloudFormation itself has no additional charge; you pay only for the underlying resources it creates.
- Hashicorpâs Terraform is a third-party alternative.
- Troposphere is a Python library that makes it much easier to create CloudFormation templates.
- đšUntil 2016, CloudFormation used only an awkward JSON format that made both reading and debugging difficult. Using it effectively typically involved building additional tooling, including converting templates to YAML, but YAML is now supported directly.
- đ¸CloudFormation is useful but complex and with a variety of pain points. Many companies find alternate solutions, and many companies use it, but only with significant additional tooling.
- đ¸CloudFormation can be very slow, especially for items like CloudFront distributions.
- đ¸Itâs hard to assemble good CloudFormation configurations from existing state. AWS does offer a trick to do this, but itâs very clumsy.
- đ¸Many users donât use CloudFormation at all because of its limitations, or because they find other solutions preferable. Often there are other ways to accomplish the same goals, such as local scripts (Boto, Bash, Ansible, etc.) you manage yourself that build infrastructure, or Docker-based solutions (Convox, etc.).
- đ Homepage â User guide â FAQ â Security groups â Pricing
- VPC (Virtual Private Cloud) is the virtualized networking layer of your AWS systems.
- Most AWS users should have a basic understanding of VPC concepts, but few need to get into all the details. VPC configurations can be trivial or extremely complex, depending on the extent of your network and security needs.
- All modern AWS accounts (those created after 2013-12-04) are âEC2-VPCâ accounts that support VPCs, and all instances will be in a default VPC. Older accounts may still be using âEC2-Classicâ mode. Some features donât work without VPCs, so you probably will want to migrate.
- âSecurity groups are your first line of defense for your servers. Be extremely restrictive of what ports are open to all incoming connections. In general, if you use CLBs, ALBs or other load balancing, the only ports that need to be open to incoming traffic would be port 22 and whatever port your application uses.
- Port hygiene: A good habit is to pick unique ports within an unusual range for each different kind of production service. For example, your web frontend might use 3010, your backend services 3020 and 3021, and your Postgres instances the usual 5432. Then make sure you have fine-grained security groups for each set of servers. This makes you disciplined about listing out your services, and is also more error-proof. For example, should you accidentally have an extra Apache server running on the default port 80 on a backend server, it will not be exposed.
- Migrating from Classic: For migrating from older EC2-Classic deployments to modern EC2-VPC setup, this article may be of help.
- For basic AWS use, one default VPC may be sufficient. But as you scale up, you should consider mapping out network topology more thoroughly. A good overview of best practices is here.
- Consider controlling access to your private AWS resources through a VPN.
- You get better visibility into and control of connection and connection attempts.
- You expose a smaller surface area for attack compared to exposing separate (potentially authenticated) services over the public internet.
- e.g. A bug in the YAML parser used by the Ruby on Rails admin site is much less serious when the admin site is only visible to the private network and accessed through VPN.
- Another common pattern (especially as deployments get larger, security or regulatory requirements get more stringent, or team sizes increase) is to provide a bastion host behind a VPN through which all SSH connections need to transit.
- đ¸Security groups are not shared across regions, so if you have infrastructure in multiple regions, make sure your configuration and deployment tools take that into account.
- âBe careful when choosing your VPC IP CIDR block: If you are going to need to make use of ClassicLink, make sure that your private IP range doesnât overlap with that of EC2 Classic.
- âIf you are going to peer VPCs, carefully consider the cost of data transfer between VPCs, since for some workloads and integrations, this can be prohibitively expensive.
- đ Homepage â Developer guide â FAQ â Pricing
- KMS (Key Management Service) is a secure service for storing and managing keys, such as the encryption keys used for EBS and S3.
- đšItâs very common for companies to manage keys completely via home-grown mechanisms, but itâs far preferable to use a service such as KMS from the beginning, as it encourages more secure design and improves policies and processes around managing keys.
- A good motivation and overview is in this AWS presentation.
- The cryptographic details are in this AWS whitepaper.
đ§ Please help expand this incomplete section.
- đ Homepage â Developer guide â FAQ â Pricing
- CloudFront is AWSâ content delivery network (CDN).
- Its primary use is improving latency for end users accessing cacheable content, by hosting it at about 40 global edge locations.
- đŞCDNs are a highly fragmented market. CloudFront has grown to be a leader, but there are many alternatives that might better suit specific needs.
- đĽHTTP/2 is now supported! Clients must support TLS 1.2 and SNI.
- While the most common use is for users to browse and download content (GET or HEAD requests), CloudFront also supports (since 2013) uploaded data (POST, PUT, DELETE, OPTIONS, and PATCH).
- You must enable this by specifying the allowed HTTP methods when you create the distribution.
- Interestingly, the cost of accepting (uploaded) data is usually less than for sending (downloaded) data.
- In its basic version, CloudFront supports SSL via the SNI extension to TLS, which is supported by all modern web browsers. If you need to support older browsers, you need to pay a few hundred dollars a month for dedicated IPs.
- đ¸âąConsider invalidation needs carefully. CloudFront does support invalidation of objects from edge locations, but this typically takes many minutes to propagate to edge locations, and costs $0.005 per request after the first 1000 requests. (Some other CDNs support this better.)
- Everyone should use TLS nowadays if possible. Ilya Grigorik's table offers a good summary of the TLS performance features of CloudFront.
- An alternative to invalidation that is often easier to manage, and instant, is to configure the distribution to cache with query strings and then append unique query strings with versions onto assets that are updated frequently.
- âąFor good web performance, it's important to turn on the option to enable compression on CloudFront distributions if the origin is S3 or another source that does not already compress.
- If using S3 as a backing store, remember that the endpoints for website hosting and for general S3 are different. Example: âbucketname.s3.amazonaws.comâ is a standard S3 serving endpoint, but to have redirect and error page support, you need to use the website hosting endpoint listed for that bucket, e.g. âbucketname.s3-website-us-east-1.amazonaws.comâ (or the appropriate region).
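The query-string versioning approach mentioned above can be as simple as fingerprinting each asset at build time. A sketch (the helper name and 8-character fingerprint length are arbitrary choices); note the distribution must be configured to include query strings in the cache key for this to work:

```python
import hashlib

def versioned_url(url, content):
    # Derive a short, stable fingerprint of the asset's bytes and append
    # it as a query string. Each new deploy yields a new cache key, so
    # stale edge copies are simply never requested again, with no need
    # for (slow, per-request-billed) invalidations.
    fingerprint = hashlib.md5(content).hexdigest()[:8]
    return f"{url}?v={fingerprint}"
```

Build tooling then rewrites asset references in HTML/CSS to the versioned URLs on every deploy.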
- đ Homepage â User guide â FAQ â Pricing
- Direct Connect is a private, dedicated connection from your network(s) to AWS.
- If your data center has a partnering relationship with AWS, setup is streamlined.
- Use it when you need more consistent, predictable network performance.
- 1 Gbps or 10 Gbps per link
- Use to peer your colocation, corporate, or physical datacenter network with your VPC(s).
- Example: Extend corporate LDAP and/or Kerberos to EC2 instances running in a VPC.
- Example: Make services that are hosted outside of AWS for financial, regulatory, or legacy reasons callable from within a VPC.
- đ Homepage â Developer guide â FAQ â Pricing
- Redshift is AWSâ managed data warehouse solution, which is massively parallel, scalable, and columnar. It is very widely used. It was built using ParAccel technology and exposes Postgres-compatible interfaces.
- âđŞWhatever data warehouse you select, your business will likely be locked in for a long time. Also (and not coincidentally) the data warehouse market is highly fragmented. Selecting a data warehouse is a choice to be made carefully, with research and awareness of the market landscape and what business intelligence tools youâll be using.
- Although Redshift is mostly Postgres-compatible, its SQL dialect and performance profile are different.
- Redshift supports only 12 primitive data types. (List of unsupported Postgres types)
- It has a leader node and compute nodes (the leader node distributes queries to the compute nodes). Note that some functions can be executed only on the leader node.
- Major 3rd-party BI tools support Redshift integration (see Quora).
- Top 10 Performance Tuning Techniques for Amazon Redshift provides an excellent list of performance tuning techniques.
- Amazon Redshift Utils contains useful utilities, scripts and views to simplify Redshift ops.
- VACUUM regularly following a significant number of deletes or updates to reclaim space and improve query performance.
- ââąWhile Redshift can handle heavy queries well, it does not scale horizontally, i.e. it does not handle multiple queries in parallel. Therefore, if you expect a high parallel load, consider replicating or (if possible) sharding your data across multiple clusters.
- đ¸The leader node, which manages communications with client programs and all communication with compute nodes, is a single point of failure.
- âąAlthough most Redshift queries parallelize well at the compute node level, certain stages are executed on the leader node, which can become the bottleneck.
- đšRedshift data commits are very expensive and serialized at the cluster level. Therefore, group multiple mutation commands (COPY/INSERT/UPDATE) into a single transaction whenever possible.
- đšRedshift does not support multi-AZ deployments. Building multi-AZ clusters is not trivial. Here is an example using Kinesis.
- đ¸Beware of storing multiple small tables in Redshift. The way Redshift tables are laid out on disk makes it impractical. The minimum space required to store a table (in MB) is nodes * slices/node * columns. For example, on a 16 node cluster an empty table with 20 columns will occupy 640MB on disk.
- âą Query performance degrades significantly during data ingestion. WLM (Workload Management) tweaks help to some extent. However, if you need consistent read performance, consider having replica clusters (at extra cost) and swapping them during updates.
- â Never resize a live cluster. The resize operation can take hours, depending on dataset size. In rare cases it may also get stuck, leaving you with a non-functional cluster. The safer approach is to create a new cluster from a snapshot, resize the new cluster, and shut down the old one.
- Redshift has reserved keywords which are not present in Postgres (see full list here). Watch out for DELTA (Delta Encodings).
- Redshift does not support many Postgres functions, most notably several date/time-related and aggregation functions. See the full list here.
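The minimum-footprint rule of thumb for small tables mentioned above works out as simple arithmetic, assuming each column occupies at least one 1 MB block on every slice:

```python
def min_table_size_mb(nodes, slices_per_node, columns):
    # Each column needs at least one 1 MB block on every slice, so even
    # an empty table occupies nodes * slices/node * columns megabytes.
    return nodes * slices_per_node * columns

# A 16-node cluster with 2 slices per node: an empty 20-column table
# already occupies 640 MB on disk.
```

Multiply by hundreds of small tables and the waste becomes significant, which is why consolidating small tables (or keeping them elsewhere) is worth considering.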
- đ Homepage â Release guide â FAQ â Pricing
- EMR (which used to stand for Elastic Map Reduce, but not anymore, since it now extends beyond map-reduce) is a service that offers managed deployment of Hadoop, HBase and Spark. It reduces the management burden of setting up and maintaining these services yourself.
- âMost of EMR is based on open source technology that you can in principle deploy yourself. However, the job workflows and much other tooling are AWS-specific. Migrating from EMR to your own clusters is possible but not always trivial.
- EMR relies on many versions of Hadoop and other supporting software. Be sure to check which versions are in use.
- đ¸âEMR costs can pile up quickly since it involves lots of instances, efficiency can be poor depending on cluster configuration and choice of workload, and accidents like hung jobs are costly. See the section on EC2 cost management, especially the tips there about Spot instances and avoiding hourly billing. This blog post has additional tips.
- âąOff-the-shelf EMR and Hadoop can have significant overhead when compared with efficient processing on a single machine. If your data is small and performance matters, you may wish to consider alternatives, as this post illustrates.
- Python programmers may want to take a look at Yelpâs mrjob.
- It takes time to tune performance of EMR jobs, which is why third-party services such as Quboleâs data service are gaining popularity as ways to improve performance or reduce costs.
This section covers tips and information on achieving high availability.
- AWS offers two levels of redundancy, regions and availability zones (AZs).
- When used correctly, regions and zones do allow for high availability. You may want to use non-AWS providers for larger business risk mitigation (i.e. not tying your company to one vendor), but reliability of AWS across regions is very high.
- Multiple regions: Using multiple regions is complex, since it's essentially like running completely separate infrastructure. It is necessary for business-critical services that demand the highest levels of redundancy. However, for many applications (like your average consumer startup), deploying extensive redundancy across regions may be overkill.
- The High Scalability Blog has a good guide to help you understand when you need to scale an application to multiple regions.
- đšMultiple AZs: Using AZs wisely is the primary tool for high availability!
- A typical single-region high availability architecture would be to deploy in two or more availability zones, with load balancing in front, as in this AWS diagram.
- The bulk of outages in AWS services affect one zone only. There have been rare outages affecting multiple zones simultaneously (for example, the great EBS failure of 2011) but in general most customersâ outages are due to using only a single AZ for some infrastructure.
- Consequently, design your architecture to minimize the impact of AZ outages, especially single-zone outages.
- Deploy key infrastructure across at least two or three AZs. Replicating a single resource across more than three zones often wonât make sense if you have other backup mechanisms in place, like S3 snapshots.
- A second or third AZ should significantly improve availability, but additional reliability of 4 or more AZs may not justify the costs or complexity (unless you have other reasons like capacity or Spot market prices).
- đ¸Watch out for cross-AZ traffic costs. This can be an unpleasant surprise in architectures with large volume of traffic crossing AZ boundaries.
- Deploy instances evenly across all available AZs, so that only a minimal fraction of your capacity is lost in case of an AZ outage.
- If your architecture has single points of failure, put all of them into a single AZ. This may seem counter-intuitive, but it minimizes the likelihood of any one SPOF to go down on an outage of a single AZ.
- EBS vs instance storage: For a number of years, EBSs had a poorer track record for availability than instance storage. For systems where individual instances can be killed and restarted easily, instance storage with sufficient redundancy could give higher availability overall. EBS has improved, and modern instance types (since 2015) are now EBS-only, so this approach, while helpful at one time, may be increasingly archaic.
- Be sure to use and understand CLBs/ALBs appropriately. Many outages are due to not using load balancers, or misunderstanding or misconfiguring them.
- AZ naming differs from one customer account to the next. Your âus-west-1aâ is not the same as another customerâs âus-west-1aâ â the letters are assigned to physical AZs randomly per account. This can also be a gotcha if you have multiple AWS accounts.
- Cross-AZ traffic is not free. At large scale, the costs add up to a significant amount of money. If possible, optimize your traffic to stay within the same AZ as much as possible.
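To make the even-distribution advice above concrete, a quick way to sanity-check a fleet is to compute the capacity you would lose if the busiest AZ went down (a sketch; the instance placements in the example are made up):

```python
from collections import Counter

def worst_case_az_loss(instance_azs):
    # Fraction of fleet capacity lost if the single most-loaded AZ
    # fails; instance_azs is one AZ name per running instance.
    counts = Counter(instance_azs)
    return max(counts.values()) / len(instance_azs)

# Evenly spread across three AZs, a single-AZ outage costs a third of
# capacity; piled into one AZ, it costs everything.
even   = worst_case_az_loss(["1a", "1b", "1c", "1a", "1b", "1c"])  # ~0.33
lumped = worst_case_az_loss(["1a", "1a", "1a", "1a"])              # 1.0
```

Running a check like this in CI or a periodic audit catches drift after scaling events.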
- AWS offers a free tier of service, which allows very limited usage of resources at no cost. For example, a micro instance and a small amount of storage are available at no charge. (If you have an old account but are starting fresh, sign up for a new one to qualify for the free tier.) AWS Activate extends this to tens of thousands of dollars of free credits to startups in certain funds or accelerators.
- You can set billing alerts to be notified of unexpected costs, such as costs exceeding the free tier. You can set these in a granular way.
- AWS offers Cost Explorer, a tool to get better visibility into costs.
- Unfortunately, the AWS console and billing tools are rarely enough to give good visibility into costs. For large accounts, the AWS billing console can time out or be too slow to use.
- Tools:
- đšEnable billing reports and install an open source tool to help manage or monitor AWS resource utilization. Netflix Ice is probably the first one you should try. Check out docker-ice for a Dockerized version that eases installation.
- đ¸One challenge with Ice is that it doesnât cover amortized cost of reserved instances.
- Other tools include Security Monkey and Cloud Custodian.
- Third-party services: Several companies offer services designed to help you gain insights into expenses or lower your AWS bill, such as OpsClarity, Cloudability, CloudHealth Technologies, and ParkMyCloud. Some of these charge a percentage of your bill, which may be expensive. See the market landscape.
- AWSâs Trusted Advisor is another service that can help with cost concerns.
- Donât be shy about asking your account manager for guidance in reducing your bill. Itâs their job to keep you happily using AWS.
- Tagging for cost visibility: As the infrastructure grows, a key part of managing costs is understanding where they lie. Itâs strongly advisable to tag resources, and as complexity grows, group them effectively. If you set up billing allocation appropriately, you can then get visibility into expenses according to organization, product, individual engineer, or any other way that is helpful.
- If you need to do custom analysis of raw billing data or want to feed it to a third party cost analysis service, enable the detailed billing report feature.
- Multiple Amazon accounts can be linked for billing purposes using the Consolidated Billing feature. Large enterprises may need complex billing structures depending on ownership and approval processes.
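Once tags and the detailed billing report are enabled, rolling up spend per tag is straightforward. A sketch, assuming a report CSV with the standard UnBlendedCost column and a cost-allocation tag exported as a user:Team column (exact column names vary with report configuration):

```python
import csv
import io
from collections import defaultdict

def costs_by_tag(report_csv_text, tag_column="user:Team"):
    # Sum unblended cost per value of one cost-allocation tag. Rows
    # missing the tag are grouped under "untagged" so coverage gaps in
    # your tagging discipline stay visible.
    totals = defaultdict(float)
    for row in csv.DictReader(io.StringIO(report_csv_text)):
        key = row.get(tag_column) or "untagged"
        totals[key] += float(row["UnBlendedCost"])
    return dict(totals)
```

The same grouping works for any tag key (product, environment, owner) once it is activated for billing allocation.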
- For deployments that involve significant network traffic, a large fraction of AWS expenses are around data transfer. Furthermore, costs of data transfer, within AZs, within regions, between regions, and into and out of AWS and the internet vary significantly depending on deployment choices.
- Some of the most common gotchas:
- đ¸AZ-to-AZ traffic: Note EC2 traffic between AZs is effectively the same as between regions. For example, deploying a Cassandra cluster across AZs is helpful for high availability, but can hurt on network costs.
- đ¸Using public IPs when not necessary: If you use an Elastic IP or public IP address of an EC2 instance, you will incur network costs, even if it is accessed locally within the AZ.
- This figure gives an overview:
- With EC2, there is a trade-off between engineering effort (more analysis, more tools, more complex architectures) and spend rate on AWS. If your EC2 costs are small, many of the efforts here are not worth the engineering time required to make them work. But once you know your costs will be growing in excess of an engineerâs salary, serious investment is often worthwhile.
- đšSpot instances:
- EC2 Spot instances are a way to get EC2 resources at significant discount â often many times cheaper than standard on-demand prices â if you're willing to accept the possibility that they may be terminated with little to no warning.
- Use Spot instances for potentially very significant discounts whenever you can use resources that may be restarted and donât maintain long-term state.
- The huge savings that you can get with Spot come at the cost of a significant increase in complexity when provisioning and reasoning about the availability of compute capacity.
- Amazon maintains Spot prices at a market-driven fluctuating level, based on their inventory of unused capacity. Prices are typically low but can spike very high. See the price history to get a sense for this.
- You set a bid price to indicate the maximum you're willing to pay, but you pay only the going rate, not the bid rate. If the market rate exceeds your bid, your instance may be terminated.
- Prices are per instance type and per availability zone. The same instance type may have wildly different prices in different zones at the same time. Different instance types can have very different prices, even for similarly powered instance types in the same zone.
- Compare prices across instance types for better deals.
- Use Spot instances whenever possible. Setting a high bid price will ensure your machines stay up the vast majority of the time, at a fraction of the price of normal instances.
- Get notified up to two minutes before price-triggered shutdown by polling your Spot instancesâ metadata.
- Make sure your usage profile works well for Spot before investing heavily in tools to manage a particular configuration.
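The termination notice mentioned above is exposed through instance metadata: the documented endpoint returns 404 until roughly two minutes before termination, then returns the scheduled time. A minimal poller sketch (error handling is deliberately coarse; any failure is treated as "no termination pending"):

```python
import urllib.request

TERMINATION_URL = (
    "http://169.254.169.254/latest/meta-data/spot/termination-time"
)

def pending_termination(url=TERMINATION_URL, timeout=1.0):
    # Returns the termination timestamp string if shutdown is imminent,
    # or None while the endpoint still 404s (no termination scheduled).
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.read().decode().strip()
    except Exception:
        return None
```

A small daemon can poll this every few seconds and, on a non-None result, drain work, checkpoint state, and deregister the instance from load balancers.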
- Spot fleet:
- You can realize even bigger cost reductions at the same time as improvements to fleet stability relative to regular Spot usage by using Spot fleet to bid on instances across instance types, availability zones, and (through multiple Spot Fleet Requests) regions.
- Spot fleet targets maintaining a specified (and weighted-by-instance-type) total capacity across a cluster of servers. If the Spot price of one instance type and availability zone combination rises above the weighted bid, it will rotate running instances out and bring up new ones of another type and location in order to maintain the target capacity without going over target cluster cost.
- Spot usage best practices:
- Application profiling:
- Profile your application to figure out its runtime characteristics. This helps give an understanding of the minimum CPU, memory, and disk required. Having this information is critical before you try to optimize Spot costs.
- Once you know the minimum application requirements, instead of resorting to fixed instance types, you can bid across a variety of instance types (which gives you higher chances of getting a Spot instance to run your application). For example, if you know that 4 CPU cores are enough for your job, you can choose any instance type with 4 or more cores that has the lowest Spot price based on history. This helps you bid for instances with greater discounts (less demand at that point).
- Spot price monitoring and intelligence:
- Spot Instance prices fluctuate depending on instance types, time of day, region and availability zone. The AWS CLI tools and API allow you to describe Spot price metadata given time, instance type, and region/AZ.
- Based on the history of Spot instance prices, you could build any number of algorithms to help you pick an instance type that:
- optimizes cost
- maximizes availability
- offers predictable performance
- You can also track the number of times an instance of a certain type got taken away (outbid) and plot that in Graphite to improve your algorithm based on time of day.
- Spot machine resource utilization:
- Spot instances are perfect candidates for running spiky workloads (Spark or MapReduce jobs) that are schedule-based and where failure is not critical.
- The time it takes to satisfy a Spot instance request can vary between 2 and 10 minutes depending on the instance type and availability of machines in that AZ.
- If you are running an infrastructure with hundreds of jobs of spiky nature, it is advisable to start pooling instances to optimize for cost, performance and most importantly time to acquire an instance.
- Pooling implies creating and maintaining Spot instances so that they do not get terminated after use. This promotes re-use of Spot instances across jobs. This of course comes with the overhead of lifecycle management.
- Pooling has its own set of metrics that can be tracked to optimize resource utilization, efficiency and cost.
- Typical pooling implementations give anywhere between 45-60% cost savings and a 40% reduction in Spot instance creation time.
- An excellent example of Pooling implementation described by Netflix (part1, part2)
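The core of the pooling idea can be sketched in a few lines. The `acquire` callable below is a stand-in for a real Spot request, which is the slow, minutes-long step pooling tries to avoid:

```python
# Sketch of Spot instance pooling: hand out an idle, already-acquired
# instance when one exists, and only fall back to a fresh Spot request
# otherwise. All names here are illustrative, not a real API.
class SpotPool:
    def __init__(self, acquire):
        self._acquire = acquire   # callable that fulfills a new Spot request
        self._idle = []

    def checkout(self):
        # Prefer a pooled instance; acquiring a new one can take minutes.
        return self._idle.pop() if self._idle else self._acquire()

    def checkin(self, instance):
        # Keep the instance alive for the next job instead of terminating it.
        self._idle.append(instance)

requests = []
pool = SpotPool(lambda: requests.append("req") or f"i-{len(requests):04d}")
a = pool.checkout()        # first job triggers a real Spot request
pool.checkin(a)
b = pool.checkout()        # second job reuses it: no new request made
print(b, len(requests))    # i-0001 1
```

The lifecycle-management overhead mentioned above lives around this core: deciding when to drain idle instances, handling outbid terminations of pooled instances, and tracking pool hit rate as a metric.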
- Application profiling:
- Spot management gotchas
- đ¸Lifetime: There is no guarantee of the lifetime of a Spot instance. It is purely based on bidding; if anyone outbids your price, the instance is taken away. Spot is not suitable for time-sensitive jobs with strong SLAs, since instances will fail based on demand for Spot capacity at that time. Apart from a brief termination notice exposed through instance metadata, AWS does not signal that an instance is going away before it goes down, which makes it hard to figure out afterward why instances were lost.
- đšAPI return data: The Spot price API returns prices of varying granularity depending on the time range specified in the API call. For example, if the last 10 minutes of history are requested, the data is fine-grained; if the last 2 days are requested, the data is coarser. Do not assume you will get all the data points; there will be skipped intervals.
- âLifecycle management: Do not attempt any fancy Spot management unless absolutely necessary. If your entire usage is only a few machines, your cost is acceptable, and your failure rate is low, do not attempt to optimize. The pain of building and maintaining it is not worth just a few hundred dollars of savings.
- Reserved Instances allow you to get significant discounts on EC2 compute hours in return for a commitment to pay for instance hours of a specific instance type in a specific AWS region and availability zone for a pre-established time frame (1 or 3 years). Further discounts can be realized through "partial upfront" or "all upfront" payment options.
- Consider using Reserved Instances when you can predict your longer-term compute needs and need a stronger guarantee of compute availability and continuity than the (typically cheaper) Spot market can provide. Be aware, however, that if your architecture changes, your computing needs may change as well, so a long-term contract that seems attractive now may turn out to be cumbersome.
- Instance reservations are not tied to specific EC2 instances â they are applied at the billing level to eligible compute hours as they are consumed across all of the instances in an account.
- Convertible Reserved Instances are a new form of Reserved Instance that allows changes during the term. Convertible RIs are more flexible than Standard RIs in exchange for a smaller savings rate. Use these if you can make a long-term commitment (3 years) but want the flexibility to move to new architectures or take advantage of future price discounts.
- Consider setting the "Regional Benefit" attribute of an RI if you don't need a capacity guarantee. The upside is that your RI is more likely to be fully used, especially if you have a lot of compute within one region.
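The RI trade-off is easiest to see with a worked cost comparison. Every price below is a made-up illustration, not a real AWS rate; the arithmetic, not the numbers, is the point:

```python
# Hypothetical comparison of on-demand vs. a 1-year "partial upfront"
# reservation for one continuously running instance. All rates are
# invented for illustration.
HOURS_PER_YEAR = 8760

on_demand_rate = 0.10        # $/hour, hypothetical
ri_upfront     = 300.00      # one-time payment, hypothetical
ri_hourly      = 0.035       # discounted hourly rate, hypothetical

on_demand_cost = on_demand_rate * HOURS_PER_YEAR
ri_cost = ri_upfront + ri_hourly * HOURS_PER_YEAR
savings_pct = 100 * (1 - ri_cost / on_demand_cost)
print(round(on_demand_cost), round(ri_cost), round(savings_pct))  # 876 607 31
```

Running this sort of calculation against your actual utilization (an RI only pays off if the instance hours are really consumed) is the key step before committing to a 1- or 3-year term.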
- Hourly billing waste: EC2 instances are billed by the instance-hour, rounded up to the next full hour! For long-lived instances this is not a big worry, but for large transient deployments, like EMR jobs or test deployments, it can be a significant expense. Never deploy many instances and terminate them after only a few minutes. In fact, if transient instances are part of your regular processing workflow, you should put in protections or alerts to check for this kind of waste.
- If you have multiple AWS accounts and have configured them to roll charges up to one account using the "Consolidated Billing" feature, you can expect unused Reserved Instance hours from one account to be applied to matching (region, availability zone, instance type) compute hours from another account.
- If you have multiple AWS accounts that are linked with Consolidated Billing, plan on using reservations, and want unused reservation capacity to be able to apply to compute hours from other accounts, you'll need to create your instances in the availability zone with the same name across accounts. Keep in mind that when you have done this, your instances may not end up in the same physical data center across accounts: Amazon shuffles availability zone names across accounts in order to equalize resource utilization.
- Make use of dynamic Auto Scaling, where possible, in order to better match your cluster size (and cost) to the current resource requirements of your service.
This section covers a few unusually useful or âmust know aboutâ resources or lists.
- AWS
- AWS In Plain English: A readable overview of all the AWS services
- Awesome AWS: A curated list of AWS tools and software
- AWS Tips I Wish I'd Known Before I Started: A list of tips from Rich Adams
- General references
- Awesome Microservices: A curated list of tools and technologies for microservice architectures. Worth browsing to learn about popular open source projects.
- Is it fast yet?: Ilya Grigorikâs TLS performance overview
- High Performance Browser Networking: A full, modern book on web network performance; a presentation on the HTTP/2 portion is here.
The authors and contributors to this content cannot guarantee the validity of the information found here. Please make sure that you understand that the information provided here is being provided freely, and that no kind of agreement or contract is created between you and any persons associated with this content or project. The authors and contributors do not assume and hereby disclaim any liability to any party for any loss, damage, or disruption caused by errors or omissions in the information contained in, associated with, or linked from this content, whether such errors or omissions result from negligence, accident, or any other cause.
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.