AWS Design Principles & Architecture
- Amazon Web Services (AWS) - History
- AWS 11 Design Principles - Defined
- Well-Architected Framework - 5 Pillars (in brief)
- AWS Architecture - PDFs
- AWS Architecture - Links
- High Availability vs Fault Tolerance
- AWS High Availability & Fault Tolerance - Links
- AWS Database High Availability - Explained
- AWS Migration Overview, Guide, Checklist - PDFs
- AWS Database Migration - Links
Amazon Web Services (AWS) – History Timeline
2003
• Chris Pinkham & Benjamin Black present White Paper on what Amazon’s Internal Infrastructure should look like
• They Suggest Selling it as a Service and prepared a Business Case
2004
• SQS Official Launch
2006
• Official Launch of Amazon Web Services (AWS)
• Completely Self-serve, only 3 Salespeople
2007
• 180,000 Developers on the Platform
• Began Building the AWS Outward-Facing “Field” Team
• Salespeople
• Account Managers
• Professional Services
• Solutions Architects
• Technical Account Management Support
2010
• All of Amazon.com Moved Over to AWS
2012
• 1st re:Invent Conference – 5,000 Attendees
2013
• AWS Certifications Launched
2014
• Committed to Achieving 100% Renewable Energy Usage for Its Global Footprint
2015
• AWS Breaks Out Its Revenue: $6B Annually with a 90% Y/Y Growth Rate
2016
• AWS Run Rate of $13B
2017
• AWS Run Rate of $27B
• re:Invent releases a host of AI Services
2018
• re:Invent Conference – 44,000 Attendees
• AWS Launches ML Specialty Certifications
• Heavy Focus on Automating AI & ML
• AWS delivered most of Amazon’s operating income
AWS 11 Design Principles – Definitions Link
• The AWS Cloud includes many design patterns and architectural options that you can apply to a wide variety of use cases.
• Some key design principles of the AWS Cloud include scalability, disposable resources, automation, loose coupling, managed services instead of servers, and flexible data storage options.
1. Scalability
2. Disposable Resources Instead of Fixed Servers
3. Automation
4. Loose Coupling
5. Services, Not Servers
6. Databases
7. Managing Increasing Volumes of Data
8. Removing Single Points of Failure
9. Optimize for Cost
10. Caching
11. Security
AWS 11 Design Principles
• The AWS Cloud includes many design patterns and architectural options that you can apply to a wide variety of use cases. Some key design principles of the AWS Cloud include scalability, disposable resources, automation, loose coupling, managed services instead of servers, and flexible data storage options.
1. Scalability
• Systems that are expected to grow over time need to be built on top of a scalable architecture. Such an architecture can support growth in users, traffic, or data size with no drop in performance.
• There are generally two ways to scale an IT architecture: vertically and horizontally.
1. Scaling Vertically
Scaling vertically takes place through an increase in the specifications of an individual resource, such as upgrading a server with a larger hard drive or a faster CPU.
2. Scaling Horizontally
Scaling horizontally takes place through an increase in the number of resources, such as adding more hard drives to a storage array or adding more servers to support an application. This is a great way to build internet-scale applications that leverage the elasticity of cloud computing.
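• As a concrete illustration, here is a minimal boto3 sketch of horizontal scaling with an EC2 Auto Scaling group; the group name, launch template, region, and Availability Zones are placeholder assumptions.
```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Horizontal scaling: a fleet that grows and shrinks across two AZs,
# rather than one ever-larger server.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",                        # assumed name
    LaunchTemplate={"LaunchTemplateName": "web-template",  # assumed template
                    "Version": "$Latest"},
    MinSize=2,
    MaxSize=10,
    DesiredCapacity=2,
    AvailabilityZones=["us-east-1a", "us-east-1b"],
)

# Target tracking: add or remove instances to hold average CPU near 50%.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="cpu-target-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)
```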
2. Disposable Resources Instead of Fixed Servers
• In a traditional infrastructure environment, you have to work with fixed resources because of the upfront cost and lead time of introducing new hardware. This drives practices such as manually logging in to servers to configure software or fix issues, hardcoding IP addresses, and running tests or processing jobs sequentially. When designing for AWS, you can take advantage of the dynamically provisioned nature of cloud computing. You can think of servers and other components as temporary resources. You can launch as many as you need, and use them only for as long as you need them.
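• A minimal boto3 sketch of the disposable-resource mindset: all configuration happens at boot via user data, and the instance is terminated when the work is done. The AMI ID and bootstrap commands are hypothetical.
```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch a throwaway worker; everything it needs is applied at boot,
# so no one ever logs in to configure it by hand.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical AMI ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    UserData="#!/bin/bash\nyum install -y my-worker\nmy-worker --run-once\n",
)
instance_id = response["Instances"][0]["InstanceId"]

# ... the job runs to completion ...

# Dispose of the server instead of maintaining it.
ec2.terminate_instances(InstanceIds=[instance_id])
```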
3. Automation
• In a traditional IT infrastructure, you often have to manually react to a variety of events. When deploying on AWS, there is an opportunity for automation, so that you improve both your system’s stability and the efficiency of your organization. Consider introducing one or more of these types of automation into your application architecture to ensure more resiliency, scalability, and performance.
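• One small example of this kind of automation, sketched with boto3: a CloudWatch alarm that automatically recovers an EC2 instance when its system status check fails, with no human involved. The instance ID is a placeholder.
```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# If the instance fails its system status check for two consecutive
# minutes, the alarm action recovers it onto healthy hardware.
cloudwatch.put_metric_alarm(
    AlarmName="auto-recover-web-1",
    Namespace="AWS/EC2",
    MetricName="StatusCheckFailed_System",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Minimum",
    Period=60,
    EvaluationPeriods=2,
    Threshold=0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:automate:us-east-1:ec2:recover"],
)
```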
4. Loose Coupling
• As application complexity increases, a desirable attribute of an IT system is that it can be broken into smaller, loosely coupled components. This means that IT systems should be designed in a way that reduces interdependencies—a change or a failure in one component should not cascade to other components.
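• A minimal sketch of loose coupling using Amazon SQS via boto3: the producer and consumer share only a queue, so either side can fail, scale, or be redeployed without the other noticing. The queue name and handler are assumptions.
```python
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")
queue_url = sqs.create_queue(QueueName="orders")["QueueUrl"]

def process_order(body):
    print("processing", body)            # stand-in for real work

# Producer: the web tier depends only on the queue, not on any worker.
sqs.send_message(QueueUrl=queue_url, MessageBody='{"order_id": 42}')

# Consumer: workers poll independently and delete messages once handled.
resp = sqs.receive_message(QueueUrl=queue_url,
                           MaxNumberOfMessages=1,
                           WaitTimeSeconds=10)
for msg in resp.get("Messages", []):
    process_order(msg["Body"])
    sqs.delete_message(QueueUrl=queue_url,
                       ReceiptHandle=msg["ReceiptHandle"])
```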
5. Services, Not Servers
• Developing, managing, and operating applications, especially at scale, requires a wide variety of underlying technology components. With traditional IT infrastructure, companies would have to build and operate all those components. AWS offers a broad set of compute, storage, database, analytics, application, and deployment services that help organizations move faster and lower IT costs. Architectures that do not leverage that breadth (e.g., if they use only Amazon EC2) might not be making the most of cloud computing and might be missing an opportunity to increase developer productivity and operational efficiency.
• Managed Services
• AWS managed services provide building blocks that developers can consume to power their applications. These managed services include databases, machine learning, analytics, queuing, search, email, notifications, and more. For example, with Amazon SQS you can offload the administrative burden of operating and scaling a highly available messaging cluster, while paying a low price for only what you use. Amazon SQS is also inherently scalable and reliable. The same applies to Amazon S3, which enables you to store as much data as you want and access it when you need it, without having to think about capacity, hard disk configurations, replication, and other related issues.
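• To make the S3 point concrete, a short boto3 sketch: store and retrieve an object with no thought given to capacity, disks, or replication. The bucket name is an assumption and must already exist.
```python
import boto3

s3 = boto3.client("s3")

# No capacity planning, disk configuration, or replication to manage.
s3.put_object(
    Bucket="my-app-assets",                    # assumed existing bucket
    Key="reports/2018-11.csv",
    Body=b"date,total\n2018-11-01,1234\n",
)
obj = s3.get_object(Bucket="my-app-assets", Key="reports/2018-11.csv")
print(obj["Body"].read().decode())
```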
6. Databases
• With traditional IT infrastructure, organizations are often limited to the database and storage technologies they can use. There can be constraints based on licensing costs and the ability to support diverse database engines. On AWS, these constraints are removed by managed database services that offer enterprise performance at open-source cost. As a result, it is not uncommon for applications to run on top of a polyglot data layer, choosing the right technology for each workload.
• Choose the Right Database Technology for Each Workload
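• As one illustration of a polyglot data layer, the boto3 sketch below uses DynamoDB for key-value session data, while the same application might keep relational orders in RDS and analytics in Redshift. The table name and key schema are assumptions.
```python
import boto3

dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
sessions = dynamodb.Table("user-sessions")  # assumed table, key "session_id"

# Key-value session state suits DynamoDB; orders might live in RDS and
# analytics in Redshift -- pick the right engine per workload.
sessions.put_item(Item={"session_id": "abc123", "user": "alice"})
item = sessions.get_item(Key={"session_id": "abc123"}).get("Item")
print(item)
```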
7. Managing Increasing Volumes of Data
• Traditional data storage and analytics tools can no longer provide the agility and flexibility required to deliver relevant business insights. That’s why many organizations are shifting to a data lake architecture. A data lake is an architectural approach that allows you to store massive amounts of data in a central location so that it’s readily available to be categorized, processed, analyzed, and consumed by diverse groups within your organization. Since data can be stored as-is, you do not have to convert it to a predefined schema, and you no longer need to know what questions to ask about your data beforehand. This enables you to select the correct technology to meet your specific analytical requirements.
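• A minimal boto3 sketch of the data lake pattern: land raw events in S3 as-is, then query them in place with Athena once a table has been defined over the prefix. The bucket, database, and table names are assumptions.
```python
import boto3

# Land raw event data in S3 exactly as it arrives; no upfront schema.
s3 = boto3.client("s3")
s3.put_object(
    Bucket="my-data-lake",                     # assumed bucket
    Key="raw/events/2018/11/01/events.json",
    Body=b'{"event": "click", "user": "alice"}\n',
)

# Later, query it in place (assumes an "events" table already defined
# over the raw prefix, e.g. via a Glue crawler or DDL statement).
athena = boto3.client("athena", region_name="us-east-1")
athena.start_query_execution(
    QueryString="SELECT event, count(*) FROM events GROUP BY event",
    QueryExecutionContext={"Database": "datalake"},
    ResultConfiguration={"OutputLocation": "s3://my-data-lake/athena-results/"},
)
```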
8. Removing Single Points of Failure
• Production systems typically come with defined or implicit objectives for uptime. A system is highly available when it can withstand the failure of an individual component or multiple components, such as hard disks, servers, and network links. To help you create a system with high availability, you can think about ways to automate recovery and reduce disruption at every layer of your architecture.
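• One way to remove a single point of failure at the DNS layer, sketched with boto3: Route 53 failover records send traffic to a primary endpoint while its health check passes, then shift to a standby automatically. The zone ID, health check ID, and addresses are placeholders.
```python
import boto3

route53 = boto3.client("route53")

# Two records for the same name: PRIMARY serves traffic while its
# health check passes; SECONDARY takes over automatically if not.
route53.change_resource_record_sets(
    HostedZoneId="Z0123456789ABCDEFGHIJ",      # hypothetical zone ID
    ChangeBatch={"Changes": [
        {"Action": "UPSERT", "ResourceRecordSet": {
            "Name": "app.example.com", "Type": "A", "TTL": 60,
            "SetIdentifier": "primary", "Failover": "PRIMARY",
            "HealthCheckId": "11111111-1111-1111-1111-111111111111",
            "ResourceRecords": [{"Value": "203.0.113.10"}]}},
        {"Action": "UPSERT", "ResourceRecordSet": {
            "Name": "app.example.com", "Type": "A", "TTL": 60,
            "SetIdentifier": "secondary", "Failover": "SECONDARY",
            "ResourceRecords": [{"Value": "203.0.113.20"}]}},
    ]},
)
```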
9. Optimize for Cost
• When you move your existing architectures into the cloud, you can reduce capital expenses and drive savings as a result of the AWS economies of scale. By iterating and using more AWS capabilities, you can realize further opportunities to create cost-optimized cloud architectures.
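• Controlling cost starts with seeing where the money goes; a small boto3 sketch against the Cost Explorer API breaks out one month of unblended cost by service. The date range is an example.
```python
import boto3

# Cost Explorer is served from us-east-1.
ce = boto3.client("ce", region_name="us-east-1")

# Monthly unblended cost, grouped by service: the first step toward
# spotting over-provisioned or unneeded resources.
resp = ce.get_cost_and_usage(
    TimePeriod={"Start": "2018-10-01", "End": "2018-11-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)
for group in resp["ResultsByTime"][0]["Groups"]:
    print(group["Keys"][0], group["Metrics"]["UnblendedCost"]["Amount"])
```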
10. Caching
• Caching is a technique that stores previously calculated data for future use. This technique is used to improve application performance and increase the cost efficiency of an implementation. It can be applied at multiple layers of an IT architecture.
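• A minimal application-layer sketch of the idea in plain Python: store an expensive result and reuse it until a TTL expires. The same pattern underlies services such as CloudFront and ElastiCache; the query function here is a hypothetical stand-in.
```python
import time

def expensive_database_query(product_id):
    return {"id": product_id}            # stand-in for a slow backend call

class TTLCache:
    """Tiny in-memory cache: reuse a stored result until it expires."""

    def __init__(self, ttl_seconds=60):
        self.ttl = ttl_seconds
        self._store = {}

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.time() > expires_at:     # stale entry: evict, report a miss
            del self._store[key]
            return None
        return value

    def set(self, key, value):
        self._store[key] = (value, time.time() + self.ttl)

cache = TTLCache(ttl_seconds=300)

def get_product(product_id):
    result = cache.get(product_id)
    if result is None:                   # miss: do the expensive work once
        result = expensive_database_query(product_id)
        cache.set(product_id, result)
    return result

print(get_product("p-42"))               # slow path, populates the cache
print(get_product("p-42"))               # served from cache
```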
11. Security
• Most of the security tools and techniques that you might already be familiar with in a traditional IT infrastructure can be used in the cloud. At the same time, AWS allows you to improve your security in a variety of ways. AWS is a platform that allows you to formalize the design of security controls in the platform itself. It simplifies system use for administrators and your IT department, and makes your environment much easier to audit in a continuous manner.
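• An example of formalizing a security control in the platform itself, sketched with boto3: enforce default encryption at rest and block all public access on an S3 bucket. The bucket name is an assumption.
```python
import boto3

s3 = boto3.client("s3")

# Every new object in this bucket is encrypted at rest by default.
s3.put_bucket_encryption(
    Bucket="my-app-assets",              # assumed existing bucket
    ServerSideEncryptionConfiguration={
        "Rules": [{"ApplyServerSideEncryptionByDefault":
                   {"SSEAlgorithm": "aws:kms"}}]
    },
)

# Public access is blocked outright, regardless of object ACLs.
s3.put_public_access_block(
    Bucket="my-app-assets",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```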
AWS Well-Architected Framework – 5 Pillars
1. Security
• Focuses on protecting information & systems
• Confidentiality and integrity of data, identifying and managing who can do what with privilege management, protecting systems, and establishing controls to detect security events
2. Reliability
• Focuses on the ability to prevent failures and to quickly recover from them to meet business and customer demand
• Foundational elements around setup, cross-project requirements, recovery planning, and how we handle change
3. Performance Efficiency
• Focuses on using IT and computing resources efficiently
• Selecting the right resource types and sizes based on workload requirements, monitoring performance, and making informed decisions to maintain efficiency as business needs evolve
4. Cost Optimization
• Focuses on avoiding unneeded costs
• Understanding and controlling where money is being spent, selecting the most appropriate and right number of resource types, analyzing spend over time, and scaling to meet business needs without overspending
5. Operational Excellence
• Focuses on running and monitoring systems to deliver business value, and on continually improving processes & procedures
• Key topics include managing and automating changes, responding to events, and defining standards to successfully manage daily operations
AWS Architecture – PDF Downloads:
• Architecting for the Cloud – AWS Best Practices – 2018
• AWS Cloud Adoption Framework – 2017
• AWS Infrastructure as Code – 2017
• AWS Well-Architected Framework (WAF) – 2018
• WAF Pillar 1 – Security – 2018
• WAF Pillar 2 – Reliability – 2018
• WAF Pillar 3 – Performance Efficiency – 2018
• WAF Pillar 4 – Cost Optimization – 2018
• WAF Pillar 5 – Operational Excellence – 2018
• WAF Lens 1 – High Performance Computing – 2018
• WAF Lens 3 – Internet of Things IoT – 2018
• AWS Disaster Recovery (DR) – 2014
• AWS Advanced Architectures for Oracle Database on EC2 – 2016
AWS Architecture – Links:
• About AWS
• AWS Global Infrastructure
• AWS Architecture Center
• AWS Well Architected
• AWS Well-Architected Framework (WAF) – 5 Pillars – PDF
• AWS Well-Architected Framework (WAF) – 5 Pillars – Blog
• AWS Well Architected Framework (WAF) – 5 Pillars – know for exams
• AWS Disaster Recovery (DR) – know for exams
• AWS Risk & Compliance (Shared Responsibility model) – know for exams
• AWS Answers – Technical Briefs
• AWS Quick Starts (Automated, Gold-standard Deployments)
Fault Tolerance Is Not High Availability – DZone.com article link
High Availability (HA) vs Fault Tolerance (FT)
• With High Availability, restoration is not instantaneous, but services are restored rapidly, often in less than a minute.
• The difference between Fault Tolerance and High Availability is that a fault-tolerant environment has no service interruption but a significantly higher cost, while a highly available environment has minimal service interruption.
AWS High Availability
• High Availability is a fundamental feature of building software solutions in a cloud environment.
• Traditionally, high availability has been a very costly affair, but with AWS you can now leverage a number of services to achieve high availability, or potentially an “always available” scenario.
Fault Tolerance
• You can think of Fault Tolerance (FT) as a less strict version of High Availability (HA).
• HA was all about keeping your platform’s offline time to a minimum while always trying to keep performance unaffected.
• With FT, we again try to minimize downtime, but performance is not a concern; in fact, you could say that degraded performance is to be expected.
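• One common client-side pattern consistent with this view, sketched in plain Python: retry a flaky call with exponential backoff and jitter, so a partial failure shows up as degraded (slower) performance rather than an outright error. TransientError is a hypothetical exception type.
```python
import random
import time

class TransientError(Exception):
    """Hypothetical stand-in for a retryable failure (timeout, throttle)."""

def call_with_retries(operation, max_attempts=5):
    # Keep responding during partial failures: the caller experiences
    # slower (degraded) responses instead of a hard error.
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except TransientError:
            if attempt == max_attempts:
                raise                    # give up after the last attempt
            time.sleep(random.uniform(0, 2 ** attempt))  # jittered backoff
```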
• High Availability (HA) – Wikipedia Definition
• Fault Tolerance (FT) – Wikipedia Definition
• Latency – Wikipedia Definition
• Concurrency – Wikipedia Definition
High Availability & Fault Tolerance – Free AWS Tutorial:
• Increase the Availability of Your Application on Amazon EC2
High Availability & Fault Tolerance – Links:
• AWS Well-Architected Framework – Design Principles
• AWS High Availability (HA) & Fault Tolerance (FT) Architecture
• Building Fault-Tolerant Applications on AWS – 2011
• Designing Web Apps for High Availability in AWS – 2018
• AWS High Availability Tips – 2018
• Designing Failover Architectures in EC2
Database High Availability on AWS – Explained
• by Slavik Dimitrovich, Solutions Architect, AWS
1. High Availability for Mere Mortals
2. Distributed Data Stores for Mere Mortals
3. Picking the Right Data Store for Your Workload
AWS DMS Links:
• What Is AWS Database Migration Service (DMS)?
• AWS Database Migration Service – Overview
• AWS Database Migration Service – Getting Started
• How to solve some common challenges faced while migrating from Oracle to PostgreSQL
• AWS Documentation Search results: database migration guide
AWS Project (~2 hours each) Links:
• Migrate from Oracle to Amazon Redshift
• Migrate from Oracle to Amazon Aurora
• Create and Manage a Nonrelational Database with Amazon DynamoDB