Starting the AWS cloud journey 

Florian Bauer
21. September 2023
Reading time: 2 min

… a real-life fairy tale.  

AWS Cloud to the rescue 

Once upon a time, there was a company that discovered its monolithic approach to application development caused far too much friction: too many developers were regularly pulled into real-time troubleshooting because some of the applications would buckle under a certain amount of user load. 

The application stack of the monolith incorporated some good principles, but it also forced every new application onto the same three-tiered tech stack: 

  • Create a .NET web application, hosted in IIS 
  • Back it with a .NET service application, hosted in IIS 
  • Use a Microsoft SQL database to store data 

Developing an application required version control, so the same repository was reused over and over (which still makes me shudder when someone says “Monorepo”, although I know they don’t mean what I remember). On top of this Mega-Monorepo sat CI/CD pipelines and monolithic testing environments (web servers plus database servers). 

All of these factors screamed for drastic improvement. This is where we started with AWS: we cherry-picked some applications from the old Mega-Monorepo stack, rethought them, and transitioned them to a cloud-based approach. 

Moving the CMS pipeline (mostly) to AWS 

In a first step we cherry-picked the CMS pipeline and reimagined it. A CMS Data plane transports data from the back-office CMS to AWS, where we present the content to our clients. A CMS Presentation plane handles user requests and enriches the plain content with everything needed. To increase application performance we use caching: a CDN caches web application responses, and Redis caches within the application. 

CMS Data plane 

For the CMS Data plane, we register a hook within the CMS that triggers on every change. It puts one message per change into an SQS queue. An AWS Lambda function handles the messages from the queue, validates them against a JSON schema, and uploads each document to an OpenSearch cluster. 
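The data-plane flow above can be sketched as a Lambda handler. This is an illustrative sketch, not our actual implementation: the field names stand in for the real JSON schema, the hand-rolled validation stands in for a proper schema library, and `index_fn` stands in for the OpenSearch client call, passed as a parameter here to keep the sketch testable.

```python
"""Sketch of the CMS Data plane Lambda (illustrative names throughout).
Each SQS record carries one CMS change as JSON; the handler validates it
and hands it to an indexing function (the real code talks to OpenSearch)."""
import json

# Hypothetical document schema: required fields and their expected types.
REQUIRED_FIELDS = {"id": str, "title": str, "body": str}

def validate_document(doc: dict) -> None:
    """Minimal stand-in for a JSON-schema check: raise on a bad document."""
    for field, expected in REQUIRED_FIELDS.items():
        if field not in doc:
            raise ValueError(f"missing field: {field}")
        if not isinstance(doc[field], expected):
            raise ValueError(f"field {field} must be {expected.__name__}")

def handler(event, context, index_fn):
    """SQS-triggered entry point. `index_fn(doc)` stands in for the
    OpenSearch index call; real Lambda handlers take only (event, context)."""
    indexed, failed = 0, []
    for record in event.get("Records", []):
        try:
            doc = json.loads(record["body"])
            validate_document(doc)
            index_fn(doc)  # upload the validated document to OpenSearch
            indexed += 1
        except ValueError:  # covers json.JSONDecodeError as well
            # Report only this message as failed so SQS redelivers just it
            # (the "partial batch response" contract for SQS triggers).
            failed.append({"itemIdentifier": record.get("messageId")})
    return {"batchItemFailures": failed, "indexed": indexed}
```

Returning `batchItemFailures` keeps one malformed message from forcing a retry of the whole batch.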


CMS Presentation plane 

In the CMS Presentation plane we planned with the following user request journey: 


1. A user requests the website. 

2. The request is routed to the CloudFront CDN, which routes requests and caches responses. 

2.1 CloudFront is configured to serve static asset requests directly from an S3 bucket, reducing the number of requests the application itself has to handle. 

3. Application requests are then handled by an Application Load Balancer (ALB) that routes the requests to ECS Fargate containers. 

3.1 Both CloudFront and the Application Load Balancer are additionally protected by AWS WAF (AWS Web Application Firewall). 

4. An ECS Fargate container reads data from the OpenSearch cluster (see the CMS Data plane chapter), transforms the raw content into HTML with CSS and JavaScript, enriches it with forum and community information, and responds with the result. 

5. The response is routed back, cached in CloudFront and delivered to the user. 
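Steps 4 and 5 can be sketched as the request handling inside a Fargate container. All names here are hypothetical: `fetch_content` stands in for the OpenSearch query, `fetch_community` for the forum/community enrichment, and the `Cache-Control` values are illustrative, chosen so CloudFront can cache the response.

```python
"""Sketch of the Presentation plane's request handling (illustrative;
the real service queries OpenSearch and the community backend)."""
from dataclasses import dataclass

@dataclass
class Response:
    status: int
    headers: dict
    body: str

def render_page(path: str, fetch_content, fetch_community) -> Response:
    """Turn a request path into a cacheable HTML response.

    `fetch_content(path)` returns the raw CMS document (or None),
    `fetch_community(doc_id)` returns the forum/community enrichment."""
    doc = fetch_content(path)
    if doc is None:
        # Cache 404s only briefly so new content shows up quickly.
        return Response(404, {"Cache-Control": "max-age=60"}, "<h1>Not found</h1>")
    extras = fetch_community(doc["id"])
    html = (f"<html><head><title>{doc['title']}</title></head>"
            f"<body>{doc['body']}<aside>{extras}</aside></body></html>")
    # max-age lets CloudFront cache the page; stale-if-error lets it keep
    # serving the cached copy if the ALB/Fargate origin fails.
    return Response(200,
                    {"Cache-Control": "max-age=300, stale-if-error=86400"},
                    html)
```

Wiring the data sources in as parameters keeps the rendering logic testable without an OpenSearch cluster.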

CMS Pipeline Solution Summary 

In the presented solution we move the workload from on-premises to AWS as early as we can. From there we can rely on the tooling AWS provides “out of the box”, such as CloudFormation for IaC and CloudWatch for logging, metrics and alarms. Furthermore, we can adopt CloudWatch Synthetics canaries for basic tests ensuring that the CMS Presentation plane works as expected. 


Initially we set out to build a new CMS pipeline with high reliability, reduced maintenance effort, and an eye on cost optimization. All components of the new pipeline were selected with resiliency in mind (auto healing and auto scaling, to name a few). 

We drastically reduced server maintenance effort by relying heavily on “serverless services” like CloudFront, S3, ALB, WAF, Fargate, OpenSearch, SQS and Lambda. Additionally, we reduced our monitoring and troubleshooting effort by leveraging CloudWatch services like CloudWatch Logs, CloudWatch Metrics and CloudWatch Alarms. 
We learned the principles and benefits of IaC, especially with CloudFormation in combination with CI/CD pipelines. With CloudFormation templates we were able to provision new accounts for testing and experimentation within a few hours (e.g. an Application Load Balancer deploys in 8–10 minutes, an RDS database cluster in 6–8 minutes, most other services within minutes). 
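Such stack rollouts can also be scripted. The sketch below builds the keyword arguments for boto3's `create_stack` call on the CloudFormation client; the stack name, template URL, parameters and tag are made-up examples, not values from our project.

```python
"""Sketch of scripting a CloudFormation rollout (illustrative values).
The returned dict matches the shape boto3 expects, i.e.
boto3.client("cloudformation").create_stack(**stack_request(...))."""

def stack_request(name: str, template_url: str, params: dict) -> dict:
    """Build create_stack keyword arguments for one stack deployment."""
    return {
        "StackName": name,
        "TemplateURL": template_url,  # template previously uploaded to S3
        "Parameters": [
            {"ParameterKey": k, "ParameterValue": v}
            for k, v in params.items()
        ],
        # Required acknowledgement when the template creates IAM resources.
        "Capabilities": ["CAPABILITY_NAMED_IAM"],
        # Tags propagate to supported resources in the stack.
        "Tags": [{"Key": "project", "Value": "cms-pipeline"}],
    }
```

Keeping the request construction separate from the API call makes the deployment script easy to unit-test without AWS credentials.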

We got acquainted with AWS services and experienced first-hand how well they handle our workloads, efficiently and with transparent cost control¹. 
The AWS Certified Cloud Practitioner training deepened our foundational knowledge of AWS cloud services, and the certification itself added value to our professional portfolios. 
To reduce both AWS runtime cost and maintenance cost, we increasingly chose AWS services that follow a serverless approach. 

¹) In a different AWS project, we once saw an unexpected jump in our bill after accidentally activating extended logging in Cognito. AWS Support was very helpful and accommodating there, though. 

The project also brought many business benefits. By changing the dependencies in the application architecture, we moved many application errors out of the customer's view. 

A failure in the CMS Data plane or in the connected CMS application only impacts content updates, not content that has already been processed. 

In the CMS Presentation plane, an error in the ALB or in the Fargate containers causes the CDN to deliver stale content; to the customer, it merely looks as if there are no updates at that moment. 

To sum up, we implemented a resilient, partially serverless solution that has proven very stable and shields customers from internal issues. 

The confidence we gained in AWS services has carried us into many further projects, which have turned out successful as well!