Why opt for the AWS Mumbai Region as an Indian cloud customer?

Posted on September 9, 2016 by Amrendra Kumar | Comments(0)

Are you an Indian AWS (Amazon Web Services) cloud customer? Do you want to adopt the AWS Mumbai Region for your production workloads? Then we have an analysis that will help you decide whether or not to go for the AWS Mumbai Region. In this blog, we’ll compare cost, service availability, S3 object download speed, latency, and compliance between the Mumbai Region and other nearby, popular regions. For those who are not familiar with AWS, a Region in AWS is a set of AWS resources within a geographic area. Each Region contains multiple, isolated locations called Availability Zones (AZs). In June 2016, AWS announced the Asia Pacific (Mumbai) Region as its 13th AWS Region. AWS now provides 35 AZs across 13 Regions globally. More than 75,000 India-based customers are already using other AWS Regions to save costs, accelerate innovation, and widen their geographic reach in minutes. What does the Mumbai Region mean to all existing and new customers? The Mumbai Region allows global and India-based developers, start-ups, enterprises, government organisations and non-profits to leverage the AWS Cloud to run their applications from India and provide even lower latency to India-based end users. Two separate
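
If you want to run a rough version of the latency check yourself before reading the full analysis, here is a minimal sketch (not the exact methodology used in the post) that times an HTTPS round trip to a few regional S3 endpoints. The endpoint list and region labels are assumptions; adjust them for the regions you actually want to compare.

```python
# Rough latency probe against a few regional S3 endpoints (illustrative only).
import time
import urllib.request
import urllib.error

# Assumed endpoint hostnames; adjust the list for the regions you care about.
ENDPOINTS = {
    "ap-south-1 (Mumbai)":        "https://s3.ap-south-1.amazonaws.com",
    "ap-southeast-1 (Singapore)": "https://s3.ap-southeast-1.amazonaws.com",
    "ap-northeast-1 (Tokyo)":     "https://s3.ap-northeast-1.amazonaws.com",
}

for region, url in ENDPOINTS.items():
    start = time.time()
    try:
        # Any response (even an error for an unsigned request) is enough to time the round trip.
        urllib.request.urlopen(url, timeout=10)
    except urllib.error.HTTPError:
        pass
    print(f"{region}: {(time.time() - start) * 1000:.0f} ms")
```

A single probe is noisy, so for a meaningful comparison you would repeat the measurement several times and look at the median.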

Continue reading…

Amazon expands Elastic Block Storage

Posted on March 24, 2015 by CloudThat | Comments(1)

As we know, Elastic Block Store (EBS) is the block-level storage that provides durable, persistent data for EC2 instances. It earlier offered volume sizes from 1 GB up to 1 TB. Last week, Amazon announced increased size and IOPS limits for EBS. Larger and faster volumes of up to 16 TB are now available in all commercial AWS regions and in AWS GovCloud (US). Also, SSD EBS volumes now get 99.999% availability. Before this release, EBS quoted only 10x durability compared to a normal hard drive, but said nothing about availability. Before this release, applications that required more than 1 TB of storage had to attach multiple EBS volumes to an instance and use striping techniques such as software RAID 0 and RAID 5 to combine those volumes into a single logical drive. In Linux, LVM was used to combine multiple EBS volumes into one logical volume. There were multiple issues with EBS volume striping: disk performance can be increased by striping EBS volumes together using RAID 0, which increases I/O throughput when you need higher IOPS rather than more disk space. But this may not be the better alternative, because if any of the underlying EBS volumes fails, it will fail
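
To illustrate the new limits, here is a hedged boto3 sketch that provisions a single large volume instead of striping several smaller ones together. The region, Availability Zone, size and IOPS figure are placeholder values for illustration, not recommendations from the announcement.

```python
# Illustrative sketch: create one large EBS volume instead of striping many 1 TB volumes.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region

response = ec2.create_volume(
    AvailabilityZone="us-east-1a",  # placeholder AZ
    Size=16384,                     # 16 TiB, the new maximum, specified in GiB
    VolumeType="io1",               # provisioned-IOPS SSD
    Iops=20000,                     # placeholder IOPS figure
)
print("Created volume:", response["VolumeId"])
```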

Continue reading…

Preparing for Amazon Web Services Certified Solutions Architect Certification (AWS Architect Certification)

Posted on February 27, 2015 by Bhavesh Goswami | Comments(21)

AWS has launched a comprehensive AWS certification program. The lack of certification from AWS was an issue in the AWS ecosystem, so the certification program is really exciting. I registered for the exam straight away and took it today (05/08/2013). I am glad to say I cleared the AWS Certified Solutions Architect exam, and I am sharing my experiences. Types of AWS Certification: under the AWS certification program, AWS will provide three kinds of certifications: AWS Certified Solutions Architect, AWS Certified SysOps Administrator, and AWS Certified Developer. Under each of these verticals, AWS will have three different levels in future: 1) Associate, 2) Professional, and 3) Master. For now, AWS has only offered the AWS Certified Solutions Architect certification, and only at the Associate level. AWS says that the rest of the certifications are coming later. Update: since the initial blog post, Amazon has released all three of the above certifications at the Associate level. Professional-level exams have also been launched for Solutions Architect and DevOps. For more information on Masters courses, please click on the respective links. What does the AWS Solutions Architect Certification consist of? According to AWS: “Earning an AWS Certified

Continue reading…

Steps to convert RHEL-based PV instances to HVM

Posted on November 5, 2014 by CloudThat | Comments(1)

My previous blog will give you an idea as to why HVM instances in AWS are better than PV instances and why people prefer HVM over PV. Moreover, the HVM instances offered by Amazon are cheaper than the PV ones. For example, a PV m1.medium costs $0.087 per hour, whereas an HVM m3.medium costs $0.070 per hour. In spite of the reduced price, we get the same memory plus one extra core with HVM. If you have a PV machine, you should definitely convert it to HVM because of the lower cost, and also because PV instances might eventually be retired. Here are the detailed steps for doing so. These steps are strictly for RHEL-based PV instances like Amazon Linux, CentOS, Red Hat etc. 1. Log in to your instance over SSH. 2. Install GRUB on it. *GRUB selects a specific kernel configuration available on a particular operating system’s partitions. 3. Stop the instance and create a snapshot of the root volume. *To avoid the downtime here, you can alternatively create an image of your instance and launch another machine. 4. Launch an Amazon Linux ‘working‘ instance (PV) and log in to that instance. 5. Create a new ‘source’ volume (in the same
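
For readers who prefer to script the preparation rather than click through the console, here is a hedged boto3 sketch of step 3 above (stop the PV instance and snapshot its root volume). The instance ID and region are placeholder assumptions, and the full post covers the remaining GRUB and volume-copy steps.

```python
# Hedged sketch of step 3: stop the PV instance and snapshot its root volume.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")   # placeholder region
instance_id = "i-0123456789abcdef0"                   # placeholder PV instance ID

# Stop the instance and wait until it is fully stopped.
ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

# Locate the root volume attached to the instance.
instance = ec2.describe_instances(InstanceIds=[instance_id])["Reservations"][0]["Instances"][0]
root_device = instance["RootDeviceName"]
root_volume = next(
    mapping["Ebs"]["VolumeId"]
    for mapping in instance["BlockDeviceMappings"]
    if mapping["DeviceName"] == root_device
)

# Snapshot the root volume before the conversion.
snapshot = ec2.create_snapshot(VolumeId=root_volume,
                               Description="PV root volume before HVM conversion")
print("Snapshot started:", snapshot["SnapshotId"])
```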

Continue reading…

Introducing a new AWS Feature: CloudWatch Logs

Posted on August 5, 2014 by CloudThat | Comments(1)

At the recently organized AWS Summit in New York, a new extension to CloudWatch called CloudWatch Logs was added to the AWS services catalog. Earlier, CloudWatch only monitored resource utilization, so to monitor application-level logs we had to opt for third-party tools. With the CloudWatch Logs service, one can upload and monitor various kinds of log files and even filter the logs for a particular pattern, which could help resolve various production issues like an invalid user trying to log in to your application, a 404 page-not-found error, or a bot attempting a denial-of-service attack. So now, along with monitoring many other AWS services like EBS, EC2, RDS etc., CloudWatch can monitor and store application logs, system logs, web server logs and other custom logs. By setting alarms on these metrics, one can also get notified about app/web-server-level issues and take the necessary actions with the least delay. Why CloudWatch Logs? There are already services like Splunk, Loggly and Logstash which monitor logs and provide custom, detailed reports. CloudWatch Logs seems pretty basic at this point, but one wouldn’t be surprised if Amazon adds more features soon. What makes CloudWatch Logs preferable over other third
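
To make the flow concrete, here is a hedged boto3 sketch that uploads one custom log line and then adds a metric filter for 404 errors, which an alarm could watch. The log group name, stream name and filter pattern are placeholder assumptions, not names from the announcement.

```python
# Hedged sketch: push a log event to CloudWatch Logs and filter for 404 errors.
import time
import boto3

logs = boto3.client("logs", region_name="us-east-1")  # placeholder region

# Placeholder log group and stream names.
logs.create_log_group(logGroupName="my-webserver-logs")
logs.create_log_stream(logGroupName="my-webserver-logs", logStreamName="web-1")

# Upload a single application log line (timestamps are in milliseconds).
logs.put_log_events(
    logGroupName="my-webserver-logs",
    logStreamName="web-1",
    logEvents=[{"timestamp": int(time.time() * 1000),
                "message": "GET /missing-page HTTP/1.1 404"}],
)

# Turn every matching line into a metric that a CloudWatch alarm can watch.
logs.put_metric_filter(
    logGroupName="my-webserver-logs",
    filterName="count-404s",
    filterPattern='"404"',
    metricTransformations=[{
        "metricName": "404Count",
        "metricNamespace": "WebServer",
        "metricValue": "1",
    }],
)
```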

Continue reading…

New Service Alert: Windows Azure ExpressRoute

Posted on April 23, 2014 by Sangram Rath | Comments(2)

Currently in public preview and available only in the US, Windows Azure ExpressRoute is a service that provides a dedicated, fast, private connection between your on-premises datacenter and an Azure datacenter. Benefits: since connections do not go over the public internet, you get a connection that is secure, reliable, fast and lower in latency. Source: https://azure.microsoft.com/en-us/services/expressroute/ How to: there are two ways one can establish an ExpressRoute connection with Azure. Directly using a WAN – where your on-premises or co-located infrastructure connects directly to Azure datacenters through a WAN network. Through an ExpressRoute location – where an exchange provider facilitates the connection between your datacenter and Azure; this is suited to scenarios where a direct WAN connection to Azure is not possible. AT&T and Equinix are the two partners providing the ExpressRoute service. Use cases: achieve a natural extension of your datacenter thanks to low latencies and better throughput; move large volumes of data like VMs, datasets and old data to Azure quickly; improve high-availability and disaster-recovery times; and support applications spread across on-premises and cloud that require secure connections and high performance. Read more FAQs about the service at https://msdn.microsoft.com/en-us/library/azure/dn606292.aspx If you are interested in learning more about Windows Azure and its services, CloudThat offers

Continue reading…


1000 jobs for BigData Analytics posted in 1 week!!

Posted on February 20, 2014 by CloudThat | Comments(2)

I teach a BigData Analytics course in Bangalore and I routinely check for jobs in this domain on Naukri.com (Naukri.com is the no. 1 job site in the country). You must be hearing about BigData, Cloud technologies and Analytics being the ‘hottest’ jobs of the century. Quoting Harvard Business Review: “Data Scientist: The Sexiest Job of the 21st Century.” So who is a data scientist? It’s a high-ranking professional with the training and curiosity to make discoveries in the world of big data. The title has been around for only a few years. (It was coined in 2008 by one of us, D.J. Patil, and Jeff Hammerbacher, then the respective leads of data and analytics efforts at LinkedIn and Facebook.) But thousands of data scientists are already working at both start-ups and well-established companies. Their sudden appearance on the business scene reflects the fact that companies are now wrestling with information that comes in varieties and volumes never encountered before. If your organization stores multiple petabytes of data, if the information most critical to your business resides in forms other than rows and columns of numbers, or if answering your biggest question would involve a

Continue reading…

AWS adds two new Route53 Health Checking features

Posted on February 4, 2014 by Sankeerth Reddy | Comments(0)

AWS recently added two features to Route 53 health checks: HTTPS support and string matching. Let’s see what these features do and how they would help a DevOps engineer. To begin with, Route 53 health checks have hitherto supported TCP and HTTP endpoints. In both options, Route 53 tries to establish a TCP connection, which needs to succeed within four seconds. In the case of an HTTP endpoint, Route 53 expects an HTTP status code of 200 or greater (but less than 400) in the response within two seconds after connecting in order to conclude that the resource is healthy. HTTPS support: HTTPS support in Route 53 simplifies resource health checks over SSL. Similar to an HTTP health check, Route 53 tries to establish a TCP connection to the resource over port 443 (the default). Prior to HTTPS support, web servers with SSL enabled had to serve at least one page over plain HTTP for the health check to pass, but not anymore. String matching: now, in an HTTP or HTTPS health check, you can also specify a string which Route 53 needs to find in the response in order to conclude that the instance is healthy. I see
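
As a quick illustration of both new features together, here is a hedged boto3 sketch that creates a health check using HTTPS plus string matching. The domain name, resource path and search string are placeholder assumptions.

```python
# Hedged sketch: create a Route 53 health check that uses HTTPS plus string matching.
import uuid
import boto3

route53 = boto3.client("route53")

response = route53.create_health_check(
    CallerReference=str(uuid.uuid4()),  # must be unique per request
    HealthCheckConfig={
        "Type": "HTTPS_STR_MATCH",      # HTTPS endpoint + response-body matching
        "FullyQualifiedDomainName": "www.example.com",  # placeholder domain
        "Port": 443,
        "ResourcePath": "/health",      # placeholder path
        "SearchString": "OK",           # healthy only if this string appears in the response
        "RequestInterval": 30,
        "FailureThreshold": 3,
    },
)
print("Health check ID:", response["HealthCheck"]["Id"])
```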

Continue reading…

New AWS Feature: Amazon RDS now supports cross-region replication

Posted on November 29, 2013 by Bhavesh Goswami | Comments(0)

Amazon Relational Database Service (RDS) now supports launching a read replica in another region with a few clicks. THIS IS HUGE!!! This is the first high-level feature that AWS supports across regions, and a great first step from AWS towards better support for cross-region infrastructure setups. Until now, replicas were only allowed in the same region. And as RDS instances only allow access to the database application and not the underlying server, it was previously impossible to configure read replicas in another region with RDS. With the latest feature, AWS users can start read replicas in another region with a few clicks. This feature is really important for many use cases. Disaster recovery: although it is very rare for an entire AWS region to go down, it does happen. Many enterprises want to replicate their databases across regions, so that when a catastrophe does occur and the primary region goes down, infrastructure can be quickly set up in another region. Such a setup requires the database to be synced across regions, and until now such deployments could not use RDS. Now they can. Active-active cross-region deployments: this use case is for active-active deployments that span regions. For example, an e-commerce
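
The feature can also be driven from the API rather than the console. Here is a hedged sketch using today’s boto3 SDK; the instance identifiers, source ARN, regions and instance class are all placeholder assumptions.

```python
# Hedged sketch: create a cross-region read replica of an existing RDS instance.
import boto3

# The replica is created by calling RDS in the *destination* region,
# pointing at the source instance's ARN in the primary region.
rds = boto3.client("rds", region_name="us-west-2")  # placeholder destination (DR) region

rds.create_db_instance_read_replica(
    DBInstanceIdentifier="mydb-replica-west",  # placeholder replica name
    SourceDBInstanceIdentifier="arn:aws:rds:us-east-1:123456789012:db:mydb",  # placeholder ARN
    DBInstanceClass="db.m1.large",             # placeholder instance class
    SourceRegion="us-east-1",                  # lets boto3 pre-sign the cross-region request
)
print("Cross-region read replica creation started")
```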

Continue reading…

Updates from AWS re:Invent 2013

Posted on November 13, 2013 by Bhavesh Goswami | Comments(0)

This blog post will follow news and events from AWS re:Invent. re:Invent 2013 is the second annual conference by AWS and has been received overwhelmingly, with about 9,000 people attending in Las Vegas. The Day 1 keynote brought out some really interesting facts that demonstrate how AWS has positioned itself as a leading provider of infrastructure services by enabling its clients to think big and achieve higher goals. Here are some of the updates: According to a research report by Gartner, AWS has five times the combined capacity of its next 14 rivals. AWS has reportedly helped its customers achieve a whopping $140M in annualized savings through its Trusted Advisor service. Customers who have migrated their applications to the cloud report a 32% reduction in total application downtime. One of AWS’s customers, the pharmaceutical company Bristol-Myers Squibb, migrated their clinical trials simulation platform to the cloud, performing their simulations in 1.2 hours instead of the 60 hours it would otherwise have taken, and with a 64% reduction in cost. AWS has also launched the following services. AWS CloudTrail: AWS gave developers their Christmas gift today in the form of a new service, AWS CloudTrail. A big pain point on AWS what

Continue reading…