For everyone working in the world of AWS, continuous learning is an essential practice. An AWS Game Day is an excellent opportunity for a team to join a “collaborative learning exercise that tests skills in implementing AWS solutions to solve real-world problems in a gamified, risk-free environment”. A Game Day is usually played in teams of two to four, so that everyone can contribute their know-how and learn something from the others.
During the event, every team is assigned the same mission within a fictional scenario. Points are awarded for implementing AWS solutions that apply best practices such as scalability, security, resilience, and cost-efficiency. A live scoreboard, visible to all participants, shows each team’s current ranking.
This edition of the AWS Game Day, for which over 60 teams had registered, was all about Migration and Modernization. The Game Day took place at the end of February and was hosted online by AWS itself via Amazon Chime, with an accompanying livestream on Twitch. Registration opened a few days beforehand, and if you want to end up on the same team as friends and colleagues, it pays to look twice when choosing a team name. 😉
To prepare for the event, it is advisable to look at the Migration and Modernization documentation provided. There you will find an overview of the AWS services used in the event and a few short introductory videos on each of them.
The Game Day started with slight confusion about the time zone. We spent an hour in the waiting area, which gave us some time to chat with AWS support, check the available information, and update our team memberships (yes, we somehow managed to enter different team names during registration). At 11 a.m. sharp, the live stream on Twitch started. We were given information about the use case and the course of the game, including the Game Dashboard and some rules and tips.
The Game Dashboard is also the central place for all information needed around the Game Day. It provides all the required credentials: the login for the AWS console, an access key pair for the CLI, and the CloudEndure and database credentials.
Equipped with the necessary information, we could start right away.
Now to the use case itself. The company we work for, Unicorn.Rentals, wants to become the biggest player in the unicorn business. To achieve this goal, it acquired the online platform BuyMyUnicorn.com. From then on, everything went bananas. The entire IT department was laid off from one day to the next. The lease for the data center was not renewed and thus expires within the next six hours. And to top it all off, the former management of BuyMyUnicorn.com is no longer around, because they got a s***load of a paycheck and bailed. Our goal is to migrate and modernize several workloads to the AWS cloud with nothing but a few notes from the former BuyMyUnicorn.com IT team (available in the form of a workshop).
This, or something very similar, is how we were introduced to the Game Day.
A VPC is a fundamental component of a scalable and highly available architecture on AWS and an essential step in any migration. It is crucial for anyone planning to move workloads to AWS to understand the underlying components such as VPCs and subnets. The setup recommended for this scenario spans two Availability Zones, each with one public and two private subnets. A diagram of the architecture could look like the following.
Experienced players should be able to build this out in the console (our recommendation). Players newer to AWS can use the VPC wizard and the Quick Start guides available on the AWS website.
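If you like to sketch the address plan before clicking through the console, the two-AZ layout above can be derived with a few lines of Python. The CIDR block, subnet sizes, and AZ names below are illustrative choices, not values from the event:

```python
import ipaddress

def plan_subnets(vpc_cidr, azs):
    """Carve a VPC CIDR into one public and two private subnets per AZ."""
    vpc = ipaddress.ip_network(vpc_cidr)
    # Three subnets per AZ -> six /20s fit comfortably into a /16.
    chunks = list(vpc.subnets(new_prefix=20))
    plan = {}
    for i, az in enumerate(azs):
        public, private_a, private_b = chunks[i * 3 : i * 3 + 3]
        plan[az] = [f"public:{public}", f"private:{private_a}", f"private:{private_b}"]
    return plan

plan = plan_subnets("10.0.0.0/16", ["eu-central-1a", "eu-central-1b"])
for az, subnets in plan.items():
    print(az, subnets)
```

Non-overlapping, evenly sized subnets like these map directly onto what the VPC wizard asks you for.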
One of the first tasks is the migration of the on-premises MySQL database. You can perform this task with AWS Database Migration Service (DMS), which helps you migrate databases to and from AWS quickly and securely. One advantage is that the source database remains fully operational during the migration, minimizing downtime for applications that rely on it. On top of that, with the right configuration, the data can be replicated continuously and with high availability: all changes to the source database are replicated to the target in real time until the migration is complete.
To move the on-premises MySQL database to AWS, we created an Amazon Aurora database with MySQL compatibility for availability and performance reasons. This can be done with just a few clicks in the console.
After that, the actual migration takes place. An AWS DMS migration consists of several components: a replication instance, source and target endpoints, and a replication task. You create a migration by launching the replication instance, defining the endpoints, and aggregating everything in a database migration task in an AWS Region.
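Conceptually, a DMS task configured for full load plus ongoing replication does two things: it bulk-copies the existing rows, then replays changes captured on the source until both sides converge for cutover. A toy, in-memory sketch of that flow (all names here are made up for illustration; real DMS operates on live databases, not dictionaries):

```python
from dataclasses import dataclass, field

@dataclass
class ToyReplication:
    """Minimal model of a full-load-and-CDC migration task."""
    source: dict
    target: dict = field(default_factory=dict)
    change_log: list = field(default_factory=list)  # writes captured during migration

    def full_load(self):
        # Bulk-copy the existing rows; the source stays writable meanwhile.
        self.target.update(self.source)

    def record_change(self, key, value):
        # A write hitting the source while the migration is in flight.
        self.source[key] = value
        self.change_log.append((key, value))

    def apply_cdc(self):
        # Replay captured changes until source and target converge.
        for key, value in self.change_log:
            self.target[key] = value
        self.change_log.clear()

task = ToyReplication(source={"unicorn-1": "sparkle"})
task.full_load()
task.record_change("unicorn-2", "rainbow")  # the app keeps writing during migration
task.apply_cdc()
assert task.target == task.source           # ready for cutover
```

This is why the source database never has to go offline: downtime shrinks to the moment you switch the application over to the target.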
Since our team consisted of two members, and the migration tasks can be done independently, we decided to split the work from the beginning. The migration of the web server is pretty straightforward once the basic VPC setup with the necessary access rights is in place. For the web server migration, AWS offers CloudEndure, an easy-to-understand tool that enables organizations to migrate workloads to Amazon Web Services (AWS) without service disruption. Through continuous replication, automated machine conversion, and application stack orchestration, CloudEndure Migration simplifies the migration process and reduces the potential for human error.
The process is pretty much the same as for the database migration, and everything is meticulously written down in the CloudEndure documentation. Most of the steps are managed directly via the CloudEndure dashboard, where you set up the replication instance, define the blueprint, and monitor the actual server migration. The only manual step was installing the CloudEndure agent on the source machine. Once the process is understood, this can be done in a few minutes. Kudos to the CloudEndure team for making server migration so hassle-free.
Bringing the existing on-premises servers into the AWS cloud is only the first step. But what happens next? Think about improvements you can make in terms of availability, scalability, and security. How do you manage incoming traffic? Is there some monitoring you could use to inspect the traffic hitting your homepage? How can you protect your environment against common web exploits? Experiment with new tools you may never have used before. The console is pretty self-explanatory, and if you get stuck, the AWS documentation and Stack Overflow are your best friends.
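To get an intuition for one of those protections, consider rate limiting, the idea behind the rate-based rules that AWS WAF offers. A toy token-bucket sketch of the concept (purely illustrative, not how WAF is implemented):

```python
import time
from collections import defaultdict

class TokenBucket:
    """Toy per-client rate limiter: each client gets `burst` tokens,
    refilled at `rate` tokens per second; a request costs one token."""

    def __init__(self, rate, burst):
        self.rate = rate    # tokens refilled per second
        self.burst = burst  # maximum tokens (burst size)
        self.tokens = defaultdict(lambda: float(burst))
        self.last = defaultdict(time.monotonic)

    def allow(self, client_ip):
        now = time.monotonic()
        elapsed = now - self.last[client_ip]
        self.last[client_ip] = now
        # Refill proportionally to the time since the client's last request.
        self.tokens[client_ip] = min(self.burst, self.tokens[client_ip] + elapsed * self.rate)
        if self.tokens[client_ip] >= 1:
            self.tokens[client_ip] -= 1
            return True
        return False

limiter = TokenBucket(rate=1.0, burst=3)
results = [limiter.allow("203.0.113.7") for _ in range(5)]
print(results)  # the initial burst passes, then requests are throttled
```

The same throttle-per-source idea, at much larger scale and with managed rule sets, is what shields your site from abusive traffic spikes.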
One final tip that may not be too obvious: do not rush your site’s transition to the AWS environment. Make it resilient, secure, and scalable first. Your company is, after all, the biggest unicorn provider on the market, and on top of that, a large marketing campaign is scheduled for later in the day. So be prepared.
In the end, and with a little luck, we managed to land third place overall. This is a fantastic result that we are very happy with, and, as we were told, we can now look forward to a small surprise from AWS. Could it be a little unicorn?
But even without that, I am not exaggerating when I say we had an awesome time, a lot of fun, and learned so much. The time flew by, and there was so much more we would have liked to try out and improve. We would definitely join another Game Day if we get the opportunity.
And so should you!