Migrating Classic Website Architecture to AWS

Classic Architecture Burdens

Cost: In a traditional hosting model, you have to provision servers for peak capacity. Outside of those peak times, the excess resources sit idle and are wasted.

Handling Unexpected Peaks: You may not be able to respond in time to unexpected traffic spikes.

Reliability: System reliability depends on redundant servers, networking devices, and connectivity, all of which increase your total cost of ownership.

Maintenance Costs: Hardware typically has a lifecycle of about three years, and migrating the system to new hardware is a costly, time-consuming task.

Time to Implementation: Provisioning a server can take one to two months or more, depending on ordering, deployment, and implementation.

Development Lifecycle: Hardware for development, staging, and pre-prod environments must be purchased up front, sits unused for long periods, and is difficult to optimize.

Proposed New Architecture

The following sections describe the key components of the new architecture deployed in the AWS Cloud and explain how it differs from a traditional hosting architecture. All deployments will be automated with nested CloudFormation templates, so provisioning a new environment will spin up the VPC, Auto Scaling group servers, load balancers, RDS databases, and other services.


Automated Scaling

AWS can automatically scale the web/application fleet on demand to handle changes in your incoming traffic. Instances can be provisioned either on a schedule around your known peak times or through a set of triggers that scale out and back in. Auto Scaling works with Amazon CloudWatch, the monitoring and observability service, to collect metrics from existing EC2 instances and add or remove instances for load distribution. For example, you may define a scale-out policy so that if the web servers' CPU usage stays above 80 percent for a particular period, additional servers are deployed and added to the Elastic Load Balancer to handle the increase.
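The CPU-based scale-out trigger could be expressed in the CloudFormation template roughly as follows. This is a minimal sketch; names such as `WebServerGroup`, the threshold, and the cooldown are illustrative assumptions, not values from the original design.

```yaml
# Sketch: scale out by 2 instances when average fleet CPU > 80% for 10 minutes
CPUScaleOutPolicy:
  Type: AWS::AutoScaling::ScalingPolicy
  Properties:
    AutoScalingGroupName: !Ref WebServerGroup   # assumed to exist in the template
    AdjustmentType: ChangeInCapacity
    ScalingAdjustment: 2
    Cooldown: '300'

CPUHighAlarm:
  Type: AWS::CloudWatch::Alarm
  Properties:
    AlarmDescription: Scale out when average CPU > 80% for two 5-minute periods
    Namespace: AWS/EC2
    MetricName: CPUUtilization
    Dimensions:
      - Name: AutoScalingGroupName
        Value: !Ref WebServerGroup
    Statistic: Average
    Period: 300
    EvaluationPeriods: 2
    Threshold: 80
    ComparisonOperator: GreaterThanThreshold
    AlarmActions:
      - !Ref CPUScaleOutPolicy
```

A matching scale-in policy with a lower threshold would normally accompany this so the fleet also contracts when traffic drops.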

Disaster Recovery

Disaster recovery and business continuity depend on your RPO/RTO objectives and the total budget for reaching those goals. Options range from the lowest-cost approach, backing up data and restoring it in case of a region failure, to the most expensive, running a full standby environment in another region and routing traffic with Route 53 latency-based routing.
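At the high end of that spectrum, the Route 53 side might look like the sketch below: one latency record per region, each aliasing that region's load balancer. The zone name, record name, and `PrimaryALB`/`SecondaryALB` resources are hypothetical placeholders.

```yaml
# Sketch: latency-based routing between two regional deployments
PrimaryRecord:
  Type: AWS::Route53::RecordSet
  Properties:
    HostedZoneName: example.com.        # placeholder zone
    Name: www.example.com.
    Type: A
    SetIdentifier: us-east-1
    Region: us-east-1
    AliasTarget:
      HostedZoneId: !GetAtt PrimaryALB.CanonicalHostedZoneID
      DNSName: !GetAtt PrimaryALB.DNSName

SecondaryRecord:
  Type: AWS::Route53::RecordSet
  Properties:
    HostedZoneName: example.com.
    Name: www.example.com.
    Type: A
    SetIdentifier: eu-west-1
    Region: eu-west-1
    AliasTarget:
      HostedZoneId: !GetAtt SecondaryALB.CanonicalHostedZoneID
      DNSName: !GetAtt SecondaryALB.DNSName
```

Route 53 then answers each DNS query with the record for the region that gives that client the lowest latency.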

The proposed design runs across multiple Availability Zones, which gives you easy access to redundant deployment locations. Deploying EC2 hosts across multiple Availability Zones makes the web application more fault-tolerant.

For backup/restore needs, EC2 instances can have EBS volumes attached. EBS is an excellent platform for block storage, and you can take snapshots of EBS volumes and store them in Amazon S3.

EBS snapshots are incremental, backing up only the changes since the previous snapshot, which shortens snapshot times and makes frequent snapshots practical. You can also use an EBS snapshot as a baseline to replicate data across multiple EBS volumes and attach those volumes to instances at the disaster-recovery site.
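One way to automate those frequent snapshots from the same CloudFormation stack is an Amazon Data Lifecycle Manager policy. The tag key, schedule, and retention below are assumptions for illustration, and `DLMRole` is a hypothetical IAM role defined elsewhere.

```yaml
# Sketch: daily snapshots of all volumes tagged Backup=daily, keeping 7 copies
DailySnapshotPolicy:
  Type: AWS::DLM::LifecyclePolicy
  Properties:
    Description: Daily EBS snapshots for the web tier
    State: ENABLED
    ExecutionRoleArn: !GetAtt DLMRole.Arn    # assumed role with DLM permissions
    PolicyDetails:
      ResourceTypes:
        - VOLUME
      TargetTags:
        - Key: Backup
          Value: daily
      Schedules:
        - Name: DailySnapshots
          CreateRule:
            Interval: 24
            IntervalUnit: HOURS
            Times:
              - '03:00'
          RetainRule:
            Count: 7
```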

Amazon S3 provides a highly available and redundant object store and is a great solution for static content such as images, videos, and other static media. By enabling S3 Cross-Region Replication, you can maintain a replicated object store in case of a region failure.
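Cross-Region Replication can also be declared in the template. A minimal sketch, assuming the destination bucket already exists in the DR region with versioning enabled and that `ReplicationRole` grants S3 the required permissions (both are hypothetical names):

```yaml
# Sketch: replicate all new objects to a bucket in another region
StaticAssetsBucket:
  Type: AWS::S3::Bucket
  Properties:
    VersioningConfiguration:       # versioning is required on both buckets
      Status: Enabled
    ReplicationConfiguration:
      Role: !GetAtt ReplicationRole.Arn
      Rules:
        - Status: Enabled
          Destination:
            Bucket: arn:aws:s3:::example-dr-assets-bucket   # placeholder ARN
```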

Finally, for database redundancy, Amazon RDS Multi-AZ deployments increase database availability and protect your data against unplanned outages.

Effective Load Distribution

AWS distributes load across the web servers with the Elastic Load Balancing (ELB) service. ELB supports health checks on hosts and distributes traffic to instances across multiple Availability Zones. ELB also supports sticky sessions to address session-affinity routing needs.
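Both the health check and sticky sessions are configured on the target group. In this sketch the health-check path, thresholds, and cookie duration are illustrative assumptions:

```yaml
# Sketch: ALB target group with health checks and lb_cookie stickiness
WebTargetGroup:
  Type: AWS::ElasticLoadBalancingV2::TargetGroup
  Properties:
    VpcId: !Ref VPC                 # assumed VPC resource
    Port: 80
    Protocol: HTTP
    HealthCheckPath: /health        # hypothetical health endpoint
    HealthCheckIntervalSeconds: 30
    HealthyThresholdCount: 2
    UnhealthyThresholdCount: 3
    TargetGroupAttributes:
      - Key: stickiness.enabled
        Value: 'true'
      - Key: stickiness.type
        Value: lb_cookie
      - Key: stickiness.lb_cookie.duration_seconds
        Value: '3600'
```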

For the database layer, Amazon RDS provides load distribution through RDS Read Replicas: read-only copies of your database that let you scale out beyond the capacity of a single DB instance for read-heavy workloads.
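In the template, a Multi-AZ primary and a read replica are both `AWS::RDS::DBInstance` resources; the replica simply points at the primary. Instance class, engine, and names here are placeholders:

```yaml
# Sketch: Multi-AZ primary with one read replica for read-heavy traffic
AppDatabase:
  Type: AWS::RDS::DBInstance
  Properties:
    Engine: mysql
    DBInstanceClass: db.m5.large
    AllocatedStorage: '100'
    MultiAZ: true                        # synchronous standby in another AZ
    MasterUsername: admin
    MasterUserPassword: !Ref DBPassword  # assumed NoEcho parameter

AppDatabaseReplica:
  Type: AWS::RDS::DBInstance
  Properties:
    SourceDBInstanceIdentifier: !Ref AppDatabase
    DBInstanceClass: db.m5.large
```

The application then sends writes to the primary endpoint and reads to the replica endpoint.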

Shared Content Storage Efficiency

Amazon Elastic File System (EFS) provides a simple, scalable, fully managed elastic NFS file system for sharing data between several EC2 instances, used here mainly for database or consolidated logging storage. EFS supports two performance modes, General Purpose and Max I/O. With Max I/O, EFS can scale to higher aggregate levels of throughput and operations per second.
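The performance mode is set at file-system creation and cannot be changed later, so it goes straight into the template. Subnet and security-group names below are assumptions:

```yaml
# Sketch: Max I/O EFS file system with a mount target in one private subnet
SharedFileSystem:
  Type: AWS::EFS::FileSystem
  Properties:
    PerformanceMode: maxIO       # or generalPurpose for latency-sensitive use
    Encrypted: true

SharedFSMountTargetA:
  Type: AWS::EFS::MountTarget
  Properties:
    FileSystemId: !Ref SharedFileSystem
    SubnetId: !Ref PrivateSubnetA        # one mount target per AZ in practice
    SecurityGroups:
      - !Ref EFSSecurityGroup            # must allow NFS (TCP 2049)
```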

Content Latency from S3 or EC2

Amazon S3 is a great solution for static content and also supports edge caching and streaming of these assets through the CloudFront service. CloudFront automatically routes requests for static content to the nearest edge location, so content is delivered to the client with the best possible performance.
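A CloudFront distribution fronting the static-assets bucket might look like this sketch; the bucket resource name and cache behavior are illustrative:

```yaml
# Sketch: CloudFront distribution with an S3 origin for static assets
StaticContentCDN:
  Type: AWS::CloudFront::Distribution
  Properties:
    DistributionConfig:
      Enabled: true
      Origins:
        - Id: S3Origin
          DomainName: !GetAtt StaticAssetsBucket.RegionalDomainName
          S3OriginConfig:
            OriginAccessIdentity: ''   # an OAI would normally be set here
      DefaultCacheBehavior:
        TargetOriginId: S3Origin
        ViewerProtocolPolicy: redirect-to-https
        ForwardedValues:
          QueryString: false
```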

For dynamic content, in-memory caches can reduce the load on services and improve performance and scalability at the database tier by caching frequently accessed information. Amazon ElastiCache is a web service, protocol-compliant with Memcached and Redis, that can automatically scale the in-memory cache you create with load and replace failed nodes.
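A single-node Redis cluster for the caching tier is a one-resource addition to the template. Node type and security-group name are assumptions:

```yaml
# Sketch: Redis cache cluster in front of the database tier
SessionCache:
  Type: AWS::ElastiCache::CacheCluster
  Properties:
    Engine: redis
    CacheNodeType: cache.m5.large
    NumCacheNodes: 1                   # Redis clusters take a single node here
    VpcSecurityGroupIds:
      - !Ref CacheSecurityGroup        # must allow TCP 6379 from app servers
```

For replication and automatic failover across AZs, an `AWS::ElastiCache::ReplicationGroup` would be used instead.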

Self-Healing Infrastructure

An EC2 Auto Scaling group automatically detects problems on EC2 instances through health checks and CloudWatch alarms, stops routing traffic to the affected instance, terminates it, and spins up a replacement, depending on the configuration.
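The self-healing behavior comes from pointing the group's health check at the load balancer rather than plain EC2 status checks. Sizes, subnets, and the launch configuration name below are placeholder assumptions:

```yaml
# Sketch: Auto Scaling group that replaces instances failing ELB health checks
WebServerGroup:
  Type: AWS::AutoScaling::AutoScalingGroup
  Properties:
    MinSize: '2'
    MaxSize: '8'
    DesiredCapacity: '2'
    HealthCheckType: ELB             # fail on application health, not just EC2
    HealthCheckGracePeriod: 120      # seconds to let a new instance boot
    TargetGroupARNs:
      - !Ref WebTargetGroup          # assumed target group resource
    VPCZoneIdentifier:
      - !Ref PrivateSubnetA
      - !Ref PrivateSubnetB
    LaunchConfigurationName: !Ref WebLaunchConfig
```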

Likewise, with RDS Multi-AZ enabled, when a problem occurs on the primary RDS instance, AWS automatically flips the DB instance's CNAME to point at the standby.

Encrypting Data at Rest and in Transit

To meet your organizational, legal, and compliance requirements, you have to define your encryption standards based on data classification. AWS provides multiple solutions for encrypting data at rest and in transit. You can protect data in transit using SSL/TLS or client-side encryption. Elastic Load Balancing uses the SSL/TLS protocol for encrypted connections: traffic is encrypted between the ELB and clients that initiate HTTPS sessions, and optionally on the connections between the ELB and EC2. For traffic from EC2 to RDS, you can use SSL from the application to encrypt the connection to the DB instance.
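The client-facing half of that is an HTTPS listener on the load balancer. The security policy shown is one of the standard predefined policies; the certificate parameter and resource names are assumptions:

```yaml
# Sketch: terminate TLS on the ALB with an ACM certificate
HTTPSListener:
  Type: AWS::ElasticLoadBalancingV2::Listener
  Properties:
    LoadBalancerArn: !Ref WebALB               # assumed ALB resource
    Port: 443
    Protocol: HTTPS
    SslPolicy: ELBSecurityPolicy-TLS-1-2-2017-01
    Certificates:
      - CertificateArn: !Ref SiteCertificateArn  # assumed ACM cert parameter
    DefaultActions:
      - Type: forward
        TargetGroupArn: !Ref WebTargetGroup
```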

For encryption at rest, S3 provides default encryption on a bucket, so all objects are encrypted when stored there, using either SSE-S3 or SSE-KMS. EBS encryption can be enabled in the Launch Configuration via the CloudFormation template, using AWS KMS customer master keys to create encrypted volumes and snapshots. RDS storage encryption can likewise be enabled in the CloudFormation template.
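Both settings are short template fragments. In this sketch, `DataKey` is a hypothetical KMS key resource and the AMI, instance type, and volume size are placeholders:

```yaml
# Sketch: SSE-KMS bucket default encryption plus encrypted EBS root volumes
AssetsBucket:
  Type: AWS::S3::Bucket
  Properties:
    BucketEncryption:
      ServerSideEncryptionConfiguration:
        - ServerSideEncryptionByDefault:
            SSEAlgorithm: aws:kms
            KMSMasterKeyID: !Ref DataKey   # assumed AWS::KMS::Key resource

WebLaunchConfig:
  Type: AWS::AutoScaling::LaunchConfiguration
  Properties:
    ImageId: !Ref BaseAmiId                # assumed AMI parameter
    InstanceType: m5.large
    BlockDeviceMappings:
      - DeviceName: /dev/xvda
        Ebs:
          VolumeSize: 50
          Encrypted: true                  # KMS-encrypted volume and snapshots
```

For RDS, the equivalent is `StorageEncrypted: true` (with an optional `KmsKeyId`) on the DB instance.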

Managing Ingestion of Increased Customer Updates

There are several options for handling bulk updates of customer information. The simplest and most cost-effective, depending on your code, is to use the AWS Lambda service and trigger the job regularly with CloudWatch Events. For more complicated, multistage pipelines, AWS Batch is a better fit: it supports parallel execution across multiple nodes and handles scheduling jobs, allocating resources, and tracking job stages.
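The Lambda route needs only a scheduled rule plus an invoke permission. `BulkUpdateFunction` and the schedule below are illustrative assumptions:

```yaml
# Sketch: invoke a bulk-update Lambda on a fixed schedule
BulkUpdateSchedule:
  Type: AWS::Events::Rule
  Properties:
    ScheduleExpression: rate(1 hour)       # placeholder cadence
    Targets:
      - Id: BulkUpdateTarget
        Arn: !GetAtt BulkUpdateFunction.Arn  # assumed AWS::Lambda::Function

BulkUpdateInvokePermission:
  Type: AWS::Lambda::Permission
  Properties:
    FunctionName: !Ref BulkUpdateFunction
    Action: lambda:InvokeFunction
    Principal: events.amazonaws.com
    SourceArn: !GetAtt BulkUpdateSchedule.Arn
```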

Connection Between the On-Premises DC and AWS

AWS Direct Connect will be used for a consistent, dedicated network connection from your premises to AWS. With Direct Connect, you can choose either a 1 Gbps or 10 Gbps connection, depending on your usage needs.

By using a VPN as a backup, you get automatic failover if a problem occurs with Direct Connect. AWS SSO is an AWS service that enables you to use your existing Microsoft AD credentials to access your cloud-based applications.
