
Migration to AWS and splitting a monolith into microservices for an online reservation platform


Client: The company provides an online reservation application to its customers.

Employees: over 150

Region: Germany

Industry: Development

Customer since: 2017

1. The Challenge

As the number of app users grew steadily, the development company needed a solution to improve the stability of the entire infrastructure and to speed up releases so it could keep pace with audience demand. The task was to migrate the monolithic Ruby on Rails application from bare-metal servers (about 50 machines) to AWS.

2. The Solution

Preparation Stage

It was suggested to split the monolithic system into a set of microservices (using Docker containers), implement a Continuous Integration / Continuous Delivery (CI/CD) solution, and adopt the full set of AWS services required.

Implementation Stage

The infrastructure inside AWS was prepared. It included:

  • a VPC with public/private subnets in different AZs;
  • security groups;
  • a DB cluster (RDS Aurora MySQL);
  • a set of ECS Fargate clusters (development/staging/production), since EKS was not yet available at the time of the migration and ECS was chosen as the container orchestrator;
  • ECR for storing Docker images;
  • AWS CodePipeline/CodeBuild/CodeDeploy for CI/CD needs;
  • an ALB with listener rules for the different environments;
  • a CloudFront distribution to cache static files and improve TTFB for clients;
  • Lambda functions for processing app data and API Gateway to interact with them.

All of the parts above were described as Terraform code, stored in a separate repository, and deployed using Terraform Enterprise.
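The case study does not include the actual Terraform code, but a minimal sketch of how a few of the pieces above (a VPC with public/private subnets across two AZs, a security group, and per-environment ECS Fargate clusters) could be expressed might look like the following. All names, CIDR ranges, and the region are illustrative assumptions, not values from the project:

    # Illustrative sketch only: region, CIDRs and names are assumptions.
    provider "aws" {
      region = "eu-central-1"
    }

    resource "aws_vpc" "main" {
      cidr_block           = "10.0.0.0/16"
      enable_dns_hostnames = true
    }

    # One public and one private subnet per availability zone.
    resource "aws_subnet" "public" {
      count                   = 2
      vpc_id                  = aws_vpc.main.id
      cidr_block              = cidrsubnet(aws_vpc.main.cidr_block, 8, count.index)
      availability_zone       = element(["eu-central-1a", "eu-central-1b"], count.index)
      map_public_ip_on_launch = true
    }

    resource "aws_subnet" "private" {
      count             = 2
      vpc_id            = aws_vpc.main.id
      cidr_block        = cidrsubnet(aws_vpc.main.cidr_block, 8, count.index + 10)
      availability_zone = element(["eu-central-1a", "eu-central-1b"], count.index)
    }

    # Simplified security group for the app tasks: HTTPS from inside the VPC,
    # unrestricted egress.
    resource "aws_security_group" "app" {
      name   = "app-tasks"
      vpc_id = aws_vpc.main.id

      ingress {
        from_port   = 443
        to_port     = 443
        protocol    = "tcp"
        cidr_blocks = [aws_vpc.main.cidr_block]
      }

      egress {
        from_port   = 0
        to_port     = 0
        protocol    = "-1"
        cidr_blocks = ["0.0.0.0/0"]
      }
    }

    # One Fargate-capable ECS cluster per environment.
    resource "aws_ecs_cluster" "env" {
      for_each = toset(["development", "staging", "production"])
      name     = "reservations-${each.key}"
    }

The remaining resources (RDS Aurora, ECR, the CI/CD pipeline, ALB, CloudFront, Lambda and API Gateway) would typically be described in the same declarative way and deployed through Terraform Enterprise.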

The migration itself then proceeded as follows:

  • the existing data from the bare-metal machines were migrated into AWS;
  • S3 was used to host static files;
  • for the DB data, MySQL replication was initially set up from the existing server into the RDS Aurora cluster, and ProxySQL was used to split read/write requests;
  • for the Redis data, a similar approach was used as with MySQL (Redis replicas behind an intermediate set of proxies);
  • log and metric collection was set up in the AWS-managed ELK (Elasticsearch) service;
  • part of the live traffic was forwarded to the new infrastructure to make sure the system was stable and worked properly (see the sketch after this list);
  • finally, 100% of the traffic was forwarded to the new infrastructure and the old services were shut down.
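The case study does not say how the live traffic was split between the old bare-metal setup and the new AWS infrastructure. One common approach is weighted DNS in Route 53, with the weight shifted gradually toward AWS. The sketch below is purely illustrative: the hosted zone, record name, endpoints, and weights are all assumptions.

    # Hypothetical weighted DNS records for a gradual cutover.
    variable "zone_id" {
      description = "Route 53 hosted zone for the (hypothetical) example.com domain"
      type        = string
    }

    # Record pointing at the old bare-metal load balancer.
    resource "aws_route53_record" "legacy" {
      zone_id        = var.zone_id
      name           = "app.example.com"
      type           = "CNAME"
      ttl            = 60
      set_identifier = "bare-metal"
      records        = ["lb.old-datacenter.example.com"]

      weighted_routing_policy {
        weight = 90   # keep most traffic on the old stack at first
      }
    }

    # Record pointing at the new ALB in AWS.
    resource "aws_route53_record" "aws" {
      zone_id        = var.zone_id
      name           = "app.example.com"
      type           = "CNAME"
      ttl            = 60
      set_identifier = "aws-alb"
      records        = ["reservations-alb-123456.eu-central-1.elb.amazonaws.com"]

      weighted_routing_policy {
        weight = 10   # gradually increased as the new system proves stable
      }
    }

Raising the weight of the AWS record step by step, and removing the legacy record once it carries 100% of the traffic, corresponds to the gradual cutover described above.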

The cooperation continues in maintenance and further management mode (infrastructure modifications, adding new microservices, monitoring).

3. The Result

The company got a modern, reliable infrastructure that is ready for a growing number of application users and makes it possible to fix issues with the app quickly and respond to user feature requests immediately through regular updates and releases of new versions.

Bring us your toughest challenge and we'll map out your route to an efficient solution.
