Strategizing Effective Cloud Migrations with an Example Case Study

December 20, 2021

According to Gartner, by the end of 2021, 70% of organizations worldwide had migrated at least some workloads to the public cloud. Gartner has also identified six factors that can derail a company's cloud migration strategy: choosing the wrong cloud migration ally, rushed application assessments, setting the wrong emphasis, poor landing zone designs, dependency bottlenecks, and hidden indirect costs. This blog post describes how CloudThat has helped its clients achieve seamless, cost-effective cloud migrations and meet their business objectives.

TABLE OF CONTENTS
Introduction
Customer Background
Customer Challenge
Assessment Process Employed
Business Objectives Identified
Our Proposed Solution
Role of AWS in the Proposed Solution
The Project Outcomes
Architecture Diagram and Design Used
Conclusion

Introduction

As organizations scale, customer demand increases rapidly. To meet these increasing demands, organizations tend to opt for newer, more advanced cloud technologies. Cloud service providers are competing today to deliver improved reliability and customer experience through their software applications. While adopting new cloud technologies is itself a challenge, staying within budget without compromising the security posture can prove next to impossible without an expert.

CloudThat offers consulting and system integration services to our clients, along with cloud migration, managed services, and Well-Architected Reviews (WAR). CloudThat ensures that cloud-delivered systems adhere to the security requirements and compliance standards expected by customers in a multi-cloud environment.

Among our various global clients, a client who offers a data platform for digital publishers approached us to migrate their infrastructure and data from Google Cloud Platform (GCP) to Amazon Web Services (AWS). Let us dive into the details of the customer challenges, the migration process, the proposed solution, the outcomes, and the architecture.

Customer Background

Our client is a performance data platform for digital publishers that helps marquee publishers like Futbol Sites/BolaVIP, Carousell, TSM Games/BlitZ, La Opinion, El Diario, 1Weather, Times Internet – Gaana, InMobi, FrontStory, SonyLIV, Zee5, Digit, and eBay with use cases around analytics and optimization to manage and boost their ad revenues. They equip their customers with real-time data and insights to manage and accelerate revenue growth.

Customer Challenge

The client's requirement was to migrate their infrastructure and data from Google Cloud Platform to AWS: all applications and services running on GCP were to be moved. To meet increasing customer demand, the client wanted the migrated application managed efficiently on a robust, world-class infrastructure deployed on the AWS cloud platform. Managing their current resources and creating new resources following standard practices was a significant challenge for the client, and the entire engineering process needed to change to improve reliability and customer experience. The focus was on cost-optimized, fault-tolerant, and highly available (HA) applications hosted on the AWS cloud for a successful IT transformation of the client's business environment.

Assessment Process Employed

Our client was keen on building a robust cloud migration methodology to improve reliability and performance, operate more securely, optimize costs, and automate security, thus improving the overall security posture. Their existing setup used Dataflow for batch and streaming jobs: batch jobs ran infrequently on a daily schedule, while a streaming job had been running for the previous 160 days, inserting data streams from Cloud Pub/Sub into BigQuery. After assessment, we suggested services from the Amazon Kinesis data family for the data streaming and batch jobs.
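As an illustration of the suggested replacement for the Pub/Sub producers, below is a minimal sketch of publishing an event to a Kinesis data stream with boto3. The stream name and event shape are hypothetical, and boto3 is imported lazily so the encoding helper can be exercised without the AWS SDK or credentials.

```python
import json

def encode_event(event: dict) -> bytes:
    """Serialize an event dict to compact UTF-8 JSON bytes for Kinesis."""
    return json.dumps(event, separators=(",", ":")).encode("utf-8")

def send_event(stream_name: str, event: dict, partition_key: str) -> dict:
    """Put a single record onto a Kinesis data stream."""
    import boto3  # deferred import; assumes AWS credentials are configured
    kinesis = boto3.client("kinesis")
    return kinesis.put_record(
        StreamName=stream_name,
        Data=encode_event(event),
        PartitionKey=partition_key,
    )

if __name__ == "__main__":
    # Hypothetical stream and event, standing in for the old Pub/Sub topics.
    send_event("publisher-events", {"publisher_id": "p1", "impressions": 42}, "p1")
```

In practice the partition key would be chosen to spread hot publishers across shards.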

They employed BigQuery to query 50 TB of their data and to receive the streaming inserts coming from Dataflow. That data was then queried from BigQuery according to the required business logic and sent to the application.

As a replacement for BigQuery, AWS offers two options depending on the processing requirements: Athena or Redshift. We eventually zeroed in on Athena, as it would be cheaper than Redshift for this workload, providing cost optimization benefits.
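A minimal sketch of how such a query could be issued against Athena with boto3 follows; the database, table, and results-bucket names are hypothetical, and the table is assumed to be partitioned by a `dt` date string.

```python
def build_query(table: str, days: int) -> str:
    """Build a simple aggregation query over a date-partitioned table."""
    return (
        f"SELECT publisher_id, SUM(impressions) AS impressions "
        f"FROM {table} "
        f"WHERE dt >= cast(date_add('day', -{days}, current_date) as varchar) "
        f"GROUP BY publisher_id"
    )

def run_query(database: str, query: str, output_s3: str) -> str:
    """Start an Athena query and return its execution id."""
    import boto3  # deferred so build_query is testable without the AWS SDK
    athena = boto3.client("athena")
    resp = athena.start_query_execution(
        QueryString=query,
        QueryExecutionContext={"Database": database},
        ResultConfiguration={"OutputLocation": output_s3},
    )
    return resp["QueryExecutionId"]

if __name__ == "__main__":
    # Hypothetical names standing in for the client's analytics setup.
    qid = run_query("analytics", build_query("ad_events", 30),
                    "s3://example-athena-results/")
    print(qid)
```

Athena's `start_query_execution` is asynchronous; a real caller would poll `get_query_execution` before reading results.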

Business Objectives Identified

  • By migrating from GCP to the AWS cloud, the customer can avail the benefits of AWS services on a secure and reliable cloud computing platform 
  • Use AWS cloud services for their analytics and application requirements 
  • Reduce data storage costs 
  • Optimize the performance of data analytics

Our Proposed Solution

  • Set up a highly available and scalable application to serve massive traffic using EC2, ALB, and CloudFront 
  • Set up the WordPress application on Amazon Lightsail 
  • Set up the RDS database 
  • Configure SNS and SQS for all the topics and subscriptions that belonged to Google Pub/Sub 
  • Migrate GCS data to Amazon S3 
  • Configure Glue Crawlers on S3 to create the databases and tables for Athena
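To illustrate the last step, here is a sketch of registering a Glue crawler over an S3 prefix with boto3; the crawler name, IAM role, database, and bucket path are hypothetical assumptions.

```python
def crawler_config(name: str, role_arn: str, database: str, s3_path: str) -> dict:
    """Assemble the keyword arguments for glue.create_crawler."""
    return {
        "Name": name,
        "Role": role_arn,
        "DatabaseName": database,
        "Targets": {"S3Targets": [{"Path": s3_path}]},
    }

def create_crawler(config: dict) -> None:
    """Create the crawler; its runs populate Athena-queryable tables."""
    import boto3  # deferred so crawler_config is testable offline
    boto3.client("glue").create_crawler(**config)

if __name__ == "__main__":
    cfg = crawler_config(
        "logs-crawler",                                     # hypothetical name
        "arn:aws:iam::123456789012:role/GlueCrawlerRole",   # hypothetical role
        "analytics",
        "s3://example-logs-bucket/hourly/",
    )
    create_crawler(cfg)
```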

Role of AWS in the Proposed Solution

We extensively employed AWS features and services in deploying the solution. Important ones include: Amazon Athena, AWS Glue, Amazon S3, Amazon SNS, Amazon SQS, Amazon EC2, AWS Elastic Load Balancing, Amazon CloudFront, Amazon CloudWatch, Amazon RDS, and Amazon QuickSight.

The Project Outcomes

  • Cost optimization was achieved by minimizing the service bill, which was expensive prior to migration 
  • The core application is highly scalable and available, running on EC2 with load balancing and a CDN in place 
  • GCS buckets were migrated to Amazon S3, with AWS Glue and Athena working seamlessly on top 
  • The client has started using AWS analytics services such as Amazon Athena and AWS Glue 
  • The database tables are updated on an hourly basis 
  • The client application queries data on S3 using Amazon Athena, helping with cost and performance optimization 
  • All the data is archived and stored as backup

Architecture Diagram and Design Used

  1. AWS Architecture Diagram and Design

Amazon EC2 instances run and serve the application behind public-facing Application Load Balancers, with CloudFront as a caching layer. Amazon S3 stores static data and feeds it to Glue Crawlers, whose tables are then queried by Amazon Athena. Amazon SNS acts as the publisher, sending messages to the required topics, and Amazon SQS acts as the subscriber, delivering data to the different application services for processing. We employed Amazon QuickSight to provide BI dashboards and analytics.
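A minimal sketch of wiring one such SNS topic to an SQS queue with boto3 is shown below. The topic name and the `-queue` naming convention are hypothetical, and the SQS queue policy permitting SNS delivery is omitted for brevity.

```python
def queue_name_for_topic(topic_name: str) -> str:
    """Hypothetical naming convention: one queue per topic."""
    return f"{topic_name}-queue"

def fan_out(topic_name: str) -> tuple:
    """Create an SNS topic and an SQS queue, and subscribe the queue.

    Returns (topic_arn, queue_url). Assumes AWS credentials are configured;
    a real setup also needs a queue policy allowing SNS to deliver messages.
    """
    import boto3  # deferred so queue_name_for_topic is testable offline
    sns, sqs = boto3.client("sns"), boto3.client("sqs")
    topic_arn = sns.create_topic(Name=topic_name)["TopicArn"]
    queue_url = sqs.create_queue(QueueName=queue_name_for_topic(topic_name))["QueueUrl"]
    queue_arn = sqs.get_queue_attributes(
        QueueUrl=queue_url, AttributeNames=["QueueArn"]
    )["Attributes"]["QueueArn"]
    sns.subscribe(TopicArn=topic_arn, Protocol="sqs", Endpoint=queue_arn)
    return topic_arn, queue_url

if __name__ == "__main__":
    fan_out("ad-events")  # hypothetical topic migrated from Google Pub/Sub
```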

2. AWS Architecture Diagram

AWS Lambda is employed to run scheduled (cron) tasks on the Amazon Athena tables and store the resulting data on S3. Those S3 insights are then processed and served through an Application Load Balancer with CloudFront caching.

3. Automation of Serving Athena Tables with the Previous 30 Days of Data from S3

  1. Delete partitions older than 30 days from the tables
  2. Move the data files removed from the tables to a different S3 bucket
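The two steps above could be sketched as follows; the table, database, and bucket names are hypothetical, and partitions are assumed to be keyed by a `dt=YYYY-MM-DD` string.

```python
import datetime

def drop_partition_sql(table: str, cutoff: datetime.date) -> str:
    """Build the Athena DDL that drops one dated partition."""
    return f"ALTER TABLE {table} DROP IF EXISTS PARTITION (dt='{cutoff.isoformat()}')"

def archive_and_drop(table: str, bucket: str, archive_bucket: str, days: int = 30) -> None:
    """Drop the partition that just aged out, then move its files aside."""
    import boto3  # deferred so drop_partition_sql is testable offline
    cutoff = datetime.date.today() - datetime.timedelta(days=days)
    athena, s3 = boto3.client("athena"), boto3.client("s3")
    athena.start_query_execution(
        QueryString=drop_partition_sql(table, cutoff),
        QueryExecutionContext={"Database": "analytics"},  # hypothetical database
        ResultConfiguration={"OutputLocation": f"s3://{archive_bucket}/athena-results/"},
    )
    # Copy each data file for the dropped partition to the archive bucket,
    # then delete the original.
    prefix = f"{table}/dt={cutoff.isoformat()}/"
    for page in s3.get_paginator("list_objects_v2").paginate(Bucket=bucket, Prefix=prefix):
        for obj in page.get("Contents", []):
            s3.copy_object(Bucket=archive_bucket, Key=obj["Key"],
                           CopySource={"Bucket": bucket, "Key": obj["Key"]})
            s3.delete_object(Bucket=bucket, Key=obj["Key"])
```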

4. Highly available and Scalable Architecture and Automation of Athena table Updating Process:

  1. The web layer is exposed to the internet via the CloudFront caching layer and a public-facing Application Load Balancer. Internal applications run on EC2 server fleets that are highly scalable and highly available.
  2. Amazon EC2: serves the application and writes log data to S3 every hour.
  3. Lambda function 1: XYZ-Production-Job, triggered on the S3 push event, starts the Glue Crawler corresponding to the table name.
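A sketch of what such a Lambda handler could look like is shown below; the key-to-crawler naming convention (one crawler per top-level table prefix) is a hypothetical assumption.

```python
def crawler_for_key(key: str) -> str:
    """Map an S3 key like 'ad_events/dt=2021-12-20/part-0.gz' to a crawler
    name, assuming one crawler per top-level table prefix."""
    table = key.split("/", 1)[0]
    return f"{table}-crawler"

def handler(event, context):
    """Lambda entry point for S3 ObjectCreated notifications."""
    import boto3  # deferred so crawler_for_key is testable offline
    glue = boto3.client("glue")
    for record in event["Records"]:
        key = record["s3"]["object"]["key"]
        glue.start_crawler(Name=crawler_for_key(key))
```

A real handler would also swallow `CrawlerRunningException`, since hourly uploads can arrive while the previous crawl is still in progress.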

Automation Architecture Diagram

5. Automation of Table Deletion from the Temp Database

  1. CloudWatch Events rule: triggers the Lambda function daily at 11:30
  2. Lambda function 1: Delete-Temp-Tables
  3. Deletes and recreates the temp database
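The cleanup Lambda could be sketched as below; the database name and the schedule expression are assumptions based on the steps above.

```python
# cron(30 11 * * ? *) — hypothetical CloudWatch Events schedule for 11:30 daily
TEMP_DATABASE = "temp"  # hypothetical Glue database name

def database_input(name: str) -> dict:
    """Build the DatabaseInput structure for glue.create_database."""
    return {"Name": name, "Description": "Scratch tables, recreated daily"}

def handler(event, context):
    """Delete and recreate the Glue temp database, dropping all its tables."""
    import boto3  # deferred so database_input is testable offline
    glue = boto3.client("glue")
    try:
        glue.delete_database(Name=TEMP_DATABASE)
    except glue.exceptions.EntityNotFoundException:
        pass  # nothing to delete on the first run
    glue.create_database(DatabaseInput=database_input(TEMP_DATABASE))
```

Deleting the database is the simplest way to drop every temp table at once, since Glue removes the tables along with the database.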

Serverless Automation for deleting temporary database

Conclusion

Cloud migration involves capital risk, budget constraints, disaster recovery strategy, and many other considerations. During the process, the security posture can become vulnerable and unstable, exposing the organization to potential adversaries. It is therefore recommended to choose an expert third-party vendor to manage your data migration requirements.

CloudThat is a Microsoft Gold Partner, AWS Advanced Consulting Partner, and Google Cloud Partner, and has successfully led many migration projects for our esteemed clients. Get in touch with us for quick results.
To get started, go through CloudThat's Expert Advisory page and Managed Services Package offerings. You can easily get in touch with our highly accomplished team of experts to carry out your migration needs.

Learn more about Cloud Migration Methodology and implementation here: 5 Key Cloud Migration Challenges and Their Proven Solutions

Feel free to drop a comment or any queries that you have regarding cloud migration, and we will get back to you quickly.

