Many companies create applications using serverless architecture on AWS.
They usually start with a pilot on a new project.
Then, convinced by the technology, they start looking into migrating their strategic legacy applications to a serverless architecture.
These applications may have the following characteristics:
- Built on legacy infrastructure, like on-premises servers
- Providing essential services to their users, so service discontinuation is unacceptable
- Constantly evolving to meet new requirements
Let's talk about the challenges of migrating such applications to a serverless architecture on AWS cloud!
1. Defining The Target Serverless Architecture
This is the first step. Based on my own experience, this is not the hardest part of the migration plan. If you are migrating an API from a legacy server, then the target architecture may look like the following:
If you are not familiar with AWS serverless services, here is a quick explanation of each of these.
- Lambda: running code in the cloud, on demand
- API Gateway: making Lambdas callable via an API
- Cognito: serverless authentication service
- CloudWatch: monitoring, logs, and rules for scheduling Lambdas
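To make these roles concrete, here is a minimal sketch of a Python Lambda handler as API Gateway would invoke it with the Lambda proxy integration (the query parameter and message are made up for the example):

```python
import json

def handler(event, context):
    """Minimal Lambda handler behind an API Gateway proxy integration."""
    # API Gateway puts the HTTP request details (method, path, query
    # parameters, body...) into `event`, and expects a dict with
    # statusCode/headers/body that it turns back into an HTTP response.
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Cognito and CloudWatch plug into the same route transparently, for authentication and for logging/scheduling respectively.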
Of course, you may need serverless architecture patterns other than the simple one described in the picture above.
AWS released great resources to help you find the right ones! You can read the AWS Well-Architected Framework Serverless Application Lens, or have a quick look at all the patterns in the cdk-patterns/serverless GitHub repository.
With these resources at hand, you can sketch a first version of what the application architecture will look like after the migration to serverless.
2. How to Make the Switch to The Serverless Architecture
There are two main ways of doing that.
The Big Bang Approach
You build a separate, serverless version of the application. Then you tell your users to start using it when it is ready.
- You start building the serverless application from scratch, so you don't have to deal with the peculiarities of the legacy system.
- You need to implement every evolution of the application twice while the project is in progress (once in the legacy application and once in the new one), or delay them until the new application is released.
- Until you release the application, it is not battle-tested.
- You risk being overwhelmed with user feedback right after the release. This can create a huge backlog of necessary improvements, although frequent user tests may mitigate this risk.
After the real, cosmic "Big Bang", everything was a mess and it took millions of years for things to settle. There is a risk of the same happening to your application if you take that path.
Still, you can take this approach if you can avoid changing the application for some time. You can have the two versions of the application running in parallel until you are confident enough to shut down the old one.
The Step by Step Migration to Serverless Approach
You replace the components of the application with serverless versions of them, one at a time, while your users keep using this hybrid version of the application.
- As the migration is in progress, you can keep the application evolving
- You are more confident in the robustness of your application: you battle-test new components regularly during the migration process.
- There is no risky release at the end of the migration. You can take user feedback into account during the migration process
- This approach is more complex than the Big Bang approach. You need to have a clear strategy for creating a hybrid application that allows you to migrate it step by step while it is used in production.
This approach beats the Big Bang approach in everything except complexity. In many cases, the Big Bang approach is still more pragmatic. However, there are cases where you need the step by step approach.
3. How to Switch to a Serverless Architecture on AWS Step by Step
We did a serverless migration project for a client and chose the step by step approach.
The Monolithic API
The whole thing consists of a mobile application (iOS and Android) connected to a backend via an API.
The backend stored its data in a PostgreSQL database. When I started working on it, a lift and shift migration to the AWS cloud had already been done.
A partial migration to serverless managed services had been done as well. The original application, written in Java, was already split into several Lambda functions.
The API Gateway handled HTTP requests and triggered the Lambdas.
You might say there was not much left to do.
However, this existing system corresponded exactly to the stereotype of the "distributed monolith", also known as the Lambda Pinball! Look at the diagram below to get a feel for this anti-pattern.
This anti-pattern was not the only problem with the API code in the beginning: it had bugs but no tests, enormous code duplication, no ORM for interactions with the database, no Infrastructure as Code framework for deployment, and 3 different languages...
But this anti-pattern is the most interesting to highlight because it is really connected with serverless.
Also, it can easily creep in unnoticed when you begin using serverless technologies.
Here is what happens when the mobile application calls the PATCH method on the user/profile route.
Here, "calls" means that the Lambda makes a synchronous HTTP call that goes through the API Gateway, while "invokes" means a direct AWS API call to invoke a Lambda.
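To illustrate the shape of that chain (the function names are made up, and plain Python functions stand in for the network hops), each arrow in the diagram was a synchronous call that added latency and a failure point:

```python
# Simplified stand-ins for the Lambdas in the pinball chain. In the real
# system, each call below was a synchronous HTTP request through the API
# Gateway or a direct lambda.invoke, so latencies and failure risks added up.

def get_user(user_id):
    # Lambda 3: reads the user from the database
    return {"id": user_id, "profile": {"bio": "old bio"}}

def update_profile(user_id, patch):
    # Lambda 2: fetches the user (another synchronous hop in the real
    # system), then applies the patch
    user = get_user(user_id)
    user["profile"].update(patch)
    return user

def patch_profile_handler(event, context):
    # Lambda 1: entry point for the PATCH route, delegates everything
    # to the next Lambda in the chain
    user = update_profile(event["user_id"], event["patch"])
    return {"statusCode": 200, "body": str(user["profile"])}
```

In production, the cold starts and timeouts of every link in this chain compounded on a single user-facing request.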
Without warmup, the first call to this API route always times out.
The application was almost ready to go to production but needed some bug fixing and improvements.
It had to be put in production in a short time, so rewriting it from scratch was not an option.
The client did not want to wait either, because there would never be a "perfect time" for changing the architecture.
Feature requests would keep coming anyway. So we had to migrate the application while developing it, in a step by step way.
How We Carried Out The Migration
First, let's recap the goals of the migration:
- Have a fully serverless architecture: remove the EC2 instance running the Kong API Gateway
- Clean up the microservice architecture to remove the Lambda Pinball problem (among others)
The diagram below shows the architecture pattern we used.
To do this, we used the proxy feature of AWS API Gateway. We put the whole API behind a /kong/* proxy route.
We shipped a first version of the application with all API calls going through the newly created API Gateway. Then, we migrated the application one route at a time, using the following process.
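The mechanics can be sketched with a tiny, hypothetical route table (the route and Lambda names below are invented): API Gateway matches explicit routes first, and the greedy /kong/* proxy route forwards everything not yet migrated to the legacy Kong gateway:

```python
# Hypothetical sketch of the routing behavior during the migration.
# Migrated routes are served by new Lambdas; calls still prefixed with
# /kong/ fall through to the proxy route, which forwards them to the
# legacy Kong API Gateway running on EC2.

MIGRATED_ROUTES = {
    ("PATCH", "/user/profile"): "new-profile-lambda",
}

def route(method, path):
    if path.startswith("/kong/"):
        # Greedy proxy route: forward the request untouched to Kong
        return "kong-proxy"
    # Explicit route served by a dedicated, newly written Lambda
    return MIGRATED_ROUTES.get((method, path), "not-found")
```

Migrating a route then boils down to deploying its new Lambda and removing the /kong prefix from the mobile application's call.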
The Migration Process
- Create a new Lambda for the route, implementing the same function as the existing one.
- At the same time, introduce what was lacking before: deployment with the Serverless Framework for Infrastructure as Code, tests, an ORM, translating everything from Python 2, Java, or Node to Python 3.7, etc.
- Release a staging version of the application using the new Lambda instead of the old one. This was as simple as removing the /kong prefix from the API calls to the corresponding route.
- Check that everything works with this staging application.
- Every once in a while, thoroughly validate the staging app and put it in production.
To avoid the Lambda Pinball anti-pattern, we chose to have exactly 1 Lambda per API route. This made each Lambda slightly more complex but removed the architectural burden.
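As an illustration of that choice (the event shape, user store, and names are hypothetical), the multi-Lambda chain behind a profile update collapses into one self-contained handler:

```python
import json

# Hypothetical in-memory stand-in for the PostgreSQL table behind the route.
USERS = {"42": {"bio": "old bio"}}

def patch_user_profile(event, context):
    """Single Lambda owning the whole PATCH profile route.

    Validation, the database update, and the response all live here:
    no synchronous hops to other Lambdas, hence no pinball.
    """
    user_id = event["pathParameters"]["user_id"]  # hypothetical event shape
    patch = json.loads(event["body"])
    if user_id not in USERS:
        return {"statusCode": 404, "body": json.dumps({"error": "unknown user"})}
    USERS[user_id].update(patch)
    return {"statusCode": 200, "body": json.dumps(USERS[user_id])}
```

One route, one deployable unit: the slight growth in handler size buys a call graph you can actually reason about.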
Another option that is now available is to use the recently released EventBridge service. It lets you both keep your Lambdas small and avoid the Lambda Pinball.
Make sure you check it out, because it is really a game-changer!
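As a sketch of the idea (the event source, detail type, and bus below are hypothetical), a Lambda publishes a small event and forgets about it; EventBridge rules then trigger the interested Lambdas asynchronously, instead of one Lambda invoking the next:

```python
import json

def build_profile_updated_event(user_id, changes):
    """Build an EventBridge entry announcing a profile update.

    Downstream Lambdas subscribe to this event through EventBridge rules
    instead of being invoked synchronously: no more Lambda Pinball.
    """
    return {
        "Source": "myapp.users",            # hypothetical source name
        "DetailType": "UserProfileUpdated",
        "Detail": json.dumps({"user_id": user_id, "changes": changes}),
        "EventBusName": "default",
    }

def publish(entries):
    # The actual publishing call; requires AWS credentials to run.
    import boto3  # imported lazily so the sketch stays runnable offline
    boto3.client("events").put_events(Entries=entries)
```

The producer no longer knows (or waits for) its consumers, so each Lambda stays small without the synchronous call chains.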
To avoid rework, we tried to synchronize our bug-fix and feature backlog with the migration, preferring to implement new features in the new code.
Using this pattern, we were able to completely refactor the API in 2 months while fixing bugs, putting the application in production, and delivering new features regularly.
We looked into the different options for migrating monolithic applications to serverless architectures in AWS.
If you wish to keep fast-paced development without delaying the migration, you can use the architectural pattern described above.
There is really nothing specific to Kong with it. You can use it with whatever technical stack you have for your API!
Are you looking for experts in data projects on AWS? Don't hesitate to contact us!