AWS Identity & Access Management – Best Practices

Posted on September 9, 2015 by CloudThat | Comments(1)

Security is a critical aspect of any organization. This blog focuses on the account security service provided by AWS: IAM. IAM stands for Identity and Access Management and is used to control access to AWS services and resources. There are no additional charges for using IAM. For people new to IAM, the basic concepts are: User: A user is similar to a login user in an operating system such as Microsoft Windows and can log in to the AWS console with a username and password. In the AWS world, a user can be an individual, a system, or an application requiring access to AWS resources and services. Groups: A group is a collection of users. Instead of assigning similar permissions to multiple users individually, a group can be created with a set of permissions and users added to it; this simplifies managing a large number of users and their permissions. Role: A role is a set of permissions required to make AWS service requests. A role cannot be directly assigned to a user or group; instead, roles are assumed by a user, an application, or an AWS service such as EC2 to make service

Continue reading…
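
As a quick illustration of the user/group model described above, here is a minimal sketch using boto3 (the AWS SDK for Python); the group name, user name, and policy ARN are hypothetical placeholders rather than values from the post.

# A minimal sketch of the group/user pattern described above, using boto3.
# Group name, user name, and policy ARN are hypothetical placeholders.
import boto3

iam = boto3.client("iam")

# Create a group and attach a managed policy to it once...
iam.create_group(GroupName="Developers")
iam.attach_group_policy(
    GroupName="Developers",
    PolicyArn="arn:aws:iam::aws:policy/AmazonEC2ReadOnlyAccess",
)

# ...then add users to the group instead of attaching policies to each user.
iam.create_user(UserName="alice")
iam.add_user_to_group(GroupName="Developers", UserName="alice")

Attaching the policy to the group once, rather than to every user, is exactly the management simplification the excerpt describes.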

Migration of infrastructure from EC2 Classic to EC2 VPC

Posted on September 7, 2015 by CloudThat | Comments(0)

EC2-Classic to EC2-VPC Migration: Your AWS account might support both EC2-Classic and EC2-VPC, depending on when the account was created and the regions used. AWS accounts created after December 2013 do not support the EC2-Classic platform and have only the EC2-VPC environment. The EC2-VPC environment has additional advantages over EC2-Classic. In terms of security, a VPC has network ACLs, which can allow or deny access to particular IPs. Also, we can set up OpenVPN or a customer gateway between the VPC and on-premises networks. This blog will show you how to migrate instances (both EC2 and RDS) from the EC2-Classic environment to the EC2-VPC environment with zero downtime. Let's assume that I have my application server running in the cloud. The following architecture diagram represents the infrastructure running in the EC2-Classic environment. As you can see in the diagram, there is a Route 53 entry for www.mysite.com with an 'A' record. Two app servers run behind a load balancer and point to a MySQL RDS instance. To migrate the above EC2-Classic environment to the EC2-VPC environment without downtime, the following steps can be used: creating a load balancer inside the VPC, creating an AMI of the app server, launching the application server into a public

Continue reading…
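
A rough sketch of the "create an AMI, then relaunch into the VPC" steps using boto3 follows; the instance ID, subnet ID, security group, and key pair name are hypothetical placeholders, and the full zero-downtime procedure (load balancer, Route 53 cutover, RDS) is covered in the post itself.

# A rough sketch of two of the migration steps described above, using boto3.
# Instance ID, subnet ID, security group, and key name are placeholders.
import boto3

ec2 = boto3.client("ec2")

# 1. Create an AMI of the app server currently running in EC2-Classic.
image = ec2.create_image(InstanceId="i-0123456789abcdef0", Name="app-server-ami")
ec2.get_waiter("image_available").wait(ImageIds=[image["ImageId"]])

# 2. Launch a new app server from that AMI into a public subnet of the VPC.
ec2.run_instances(
    ImageId=image["ImageId"],
    InstanceType="t2.medium",
    MinCount=1,
    MaxCount=1,
    SubnetId="subnet-0abc1234",          # public subnet in the target VPC
    SecurityGroupIds=["sg-0abc1234"],    # VPC security group
    KeyName="my-keypair",
)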

Getting started with Docker on Azure

Posted on September 7, 2015 by CloudThat | Comments(0)

Docker is on the verge of becoming one of the most popular virtualization approaches; it uses Linux containers, rather than virtual machines, to segregate application data and the underlying infrastructure on your shared resources. Docker automates the deployment of any application as a portable, self-sufficient container that will run almost anywhere – including Microsoft Azure. For Azure Virtual Machines (VMs), Microsoft Azure provides VM extensions developed by Microsoft and by other trusted third-party providers. VM extensions enable security, runtime, debugging, management, and other dynamic features that let you get far more out of a virtual machine. The Azure Virtual Machine Agent is used to install, configure, manage, and run VM extensions. You can configure the VM agent and VM extensions either during VM creation or on an existing VM, using the Management Portal, PowerShell cmdlets, or the xplat-cli. So, using the Docker VM extension along with the Azure Linux Agent, we can create a Docker VM that hosts any number of containers for your applications on Azure. The Docker VM extension has some very cool features like Docker Hub integration, Docker Compose support, and Docker Hub/Registry authentication support. Create Docker VM extension

Continue reading…
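
Once a Docker host VM is up on Azure (for example via the Docker VM extension mentioned above), one way to drive it is through the Docker Engine API. Below is a hypothetical sketch using the docker Python SDK; the hostname, port, and certificate paths are placeholders and are not taken from the post.

# A hypothetical sketch of talking to a Docker host provisioned on an Azure VM
# using the docker Python SDK. Hostname, port, and cert paths are placeholders.
import docker

tls_config = docker.tls.TLSConfig(
    client_cert=("/certs/cert.pem", "/certs/key.pem"),
    ca_cert="/certs/ca.pem",
)
client = docker.DockerClient(base_url="tcp://mydockervm.cloudapp.net:2376",
                             tls=tls_config)

# Pull an image from Docker Hub and start a container on the Azure VM.
client.images.pull("nginx:latest")
client.containers.run("nginx:latest", detach=True, ports={"80/tcp": 80})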

Docker: Game changer in the world of Cloud and Virtualization

Posted on September 4, 2015 by CloudThat | Comments(0)

Docker is an open source container service that provides a lightweight alternative to heavy and complex virtual machines. It allows developers to package a whole application, with its dependencies and stack, into a box, here referred to as a container, that runs in an isolated environment. Everything required to run the application is present inside the box, making it independent of the underlying operating system and environment. Docker changes the application deployment scenario completely, helping developers ship code faster, test faster, and deploy faster, which greatly shortens the cycle between writing and running code. Tasks done by virtual machines can often be done with Docker with far greater efficiency and speed and far lower resource usage. Docker uses LXC (Linux Containers) as its base technology. One of the biggest problems developers face is dependency hell: the application works perfectly fine in the dev environment, but when deployed to the QA or production environment it throws errors and does not work, leaving developers with the huge problem of finding the cause and what exactly is missing. Docker eliminates this problem by providing a complete guarantee of execution,

Continue reading…
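
To illustrate the "package the application with its dependencies" idea, here is an illustrative sketch using the docker Python SDK; the build path and image tag are hypothetical, and a Dockerfile describing the app's dependencies is assumed to exist in that directory.

# An illustrative sketch: packaging an app and its dependencies into an image
# and running it anywhere Docker is installed. The build path and tag are
# hypothetical; a Dockerfile is assumed to exist under ./myapp.
import docker

client = docker.from_env()

# Build the image from ./myapp/Dockerfile; dependencies are baked in once.
image, build_logs = client.images.build(path="./myapp", tag="myapp:1.0")

# The same image now runs identically in dev, QA, or production.
container = client.containers.run("myapp:1.0", detach=True)
print(container.status)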

Registering and Preparing for the EX210 – Red Hat Certified System Administrator in Red Hat OpenStack Exam

Posted on August 20, 2015 by Sangram Rath | Comments(1)

OpenStack is the most popular private cloud platform in the market today, and Red Hat Enterprise Linux OpenStack Platform is a secure, stable, and enterprise-ready offering of OpenStack from Red Hat that has been co-engineered with the OpenStack Foundation for performance, making it one of the most preferred platforms for many enterprises. With most companies looking to adopt OpenStack and tap its benefits, one of the top challenges they face is the lack of OpenStack professionals. Hence, OpenStack skills are in greater demand than ever before. EX210: Red Hat Certified System Administrator in Red Hat OpenStack Exam. The EX210 exam is aimed at candidates who want to demonstrate the skills, knowledge, and abilities needed to create, configure, and manage private clouds using Red Hat Enterprise Linux OpenStack Platform and boost their careers. The official Red Hat OpenStack training and certification thus helps provide companies with the skilled resources they need. An IT professional who has earned the Red Hat Certified System Administrator in Red Hat OpenStack is able to: install and configure Red Hat Enterprise Linux OpenStack Platform; manage users, projects, flavors, and rules; configure and manage images; add compute nodes; and manage storage using Swift and Cinder. Preparing for the exam: It is recommended to

Continue reading…


Migrate your pre-configured Windows Instance from on-premise or AWS to Azure

Posted on August 20, 2015 by Arman Koradia | Comments(13)

The IT industry today is faster paced than any other industry in this century. Growth, upgrades, mobility, and availability are the buzzwords of today's world. It is arguably the most trend-driven industry there is. With the growth of cloud computing, people are moving their existing infrastructure to the cloud. Many tech giants like Amazon, Microsoft, Google, and VMware offer cloud computing services, and Amazon Web Services and Microsoft Azure are the most familiar names among them. Microsoft argues that Azure allows businesses to take full advantage of the hybrid cloud: Azure Logic Apps work with your existing assets like legacy software, ERP systems, etc., and extend them to the Azure cloud. Thus, with Azure, extensibility is in your control. Problem definition: Cloud consultants regularly get requirements like these: moving an on-premises Windows Server to Microsoft Azure, or moving an Amazon EC2 Windows instance to Microsoft Azure. Suppose we have configured a SharePoint Server or Exchange Server on AWS or on-premises; setting up the same environment on Azure from scratch might be a time-consuming or complicated process, so in such cases we want to migrate the preconfigured server to Azure. Solution: To solve the above problem, I'll take you through the

Continue reading…


Cassandra Multi-AZ Data Replication

Posted on August 19, 2015 by CloudThat | Comments(7)

Apache Cassandra is an open source, non-relational/NoSQL database. It is massively scalable and is designed to handle large amounts of data across multiple servers (here, Amazon EC2 instances), providing high availability. In this blog, we shall replicate data across nodes running in multiple Availability Zones (AZs) to ensure reliability and fault tolerance. We will also learn how to ensure that the data remains intact even when an entire AZ goes down. The initial setup consists of a Cassandra cluster with six nodes: two nodes (EC2 instances) in us-east-1a, two in us-east-1b, and two in us-east-1c. Initial setup: us-east-1a: Node 1, Node 2; us-east-1b: Node 3, Node 4; us-east-1c: Node 5, Node 6. Next, we have to make changes in the Cassandra configuration file. cassandra.yaml is the main configuration file for Cassandra; it controls how nodes are configured within a cluster, including inter-node communication, data partitioning, and replica placement. The key value we need to define in the config file in this context is the snitch. Basically, a snitch indicates which region and Availability Zone each node in the

Continue reading…
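
For context, once an EC2-aware snitch is configured, cross-AZ replica placement is driven by the keyspace's replication strategy. The following is a hedged sketch using the DataStax Python driver, assuming the EC2 snitch maps the region to the data center name 'us-east' and each AZ to a rack; the contact point, keyspace name, and replication factor are placeholders.

# A hedged sketch of defining replication across AZs with the DataStax Python
# driver, assuming the EC2 snitch (region = data center, AZ = rack).
# Contact point, keyspace name, and replication factor are placeholders.
from cassandra.cluster import Cluster

cluster = Cluster(["10.0.1.10"])   # any reachable node in the cluster
session = cluster.connect()

# With NetworkTopologyStrategy and RF=3, the three replicas of each row are
# placed on distinct racks (here: distinct AZs) wherever possible.
session.execute("""
    CREATE KEYSPACE IF NOT EXISTS demo
    WITH replication = {'class': 'NetworkTopologyStrategy', 'us-east': 3}
""")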

Migration from relational database to NoSQL database

Posted on August 19, 2015 by CloudThat | Comments(0)

Do we really need NoSQL databases? The relational database model was proposed in 1970, and since then we have used RDBMSs for most applications. But this model is having a hard time keeping pace with the volume, velocity, and variety of today's data. To keep pace with growing data storage needs, NoSQL databases were introduced, in which the focus has shifted away from relationships in the data towards a scalable solution for storing large volumes of data. Relational databases focus on the ACID properties (Atomicity, Consistency, Isolation, and Durability), whereas NoSQL databases are built around the CAP theorem (Consistency, Availability, and Partition tolerance). Market share of relational and NoSQL databases: the chart shows relational databases at 51% and NoSQL databases at 49% during the period March 2013 – 2014; during March 2014 – 2015, the share of NoSQL databases increased to 59% whereas that of relational databases fell to 41%. Advantages of a NoSQL database: Fast. Each table in NoSQL is independent of the others. NoSQL gives us the ability to scale tables horizontally, so we can store frequently required information in one table. All table joins need to be handled at the application level. Thus, data retrieval is

Continue reading…
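
As a small illustration of handling joins at the application level, here is a hedged sketch using DynamoDB (just one example of a NoSQL store) via boto3; the table names, key attributes, and item IDs are hypothetical.

# An illustrative sketch: what a relational JOIN becomes when tables are
# independent in a NoSQL store. DynamoDB via boto3 is used only as an example;
# table names, key attributes, and IDs are hypothetical.
import boto3

dynamodb = boto3.resource("dynamodb")
orders = dynamodb.Table("Orders")
customers = dynamodb.Table("Customers")

# "JOIN" handled in application code: fetch the order, then its customer.
order = orders.get_item(Key={"OrderId": "1001"})["Item"]
customer = customers.get_item(Key={"CustomerId": order["CustomerId"]})["Item"]
print(order["Total"], customer["Name"])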


Implementation of Custom Cloudwatch Metric in Amazon Web Services

Posted on August 19, 2015 by Ravi Theja | Comments(4)

What are metrics? Metrics are used to monitor various resources in Amazon Web Services, such as EBS volumes, EC2 instances, and RDS instances. Apart from the pre-defined metrics in AWS, monitoring is sometimes required for additional service parameters. To create user-defined metrics, AWS introduced custom metrics in CloudWatch. This feature can be used to store business and application metrics in Amazon CloudWatch, and a graph is plotted to give a visual interpretation of the metric. Custom metrics can be set up for almost all services, such as Amazon EC2 instances, AWS Billing, Auto Scaling, EBS volumes, and the Relational Database Service. In addition, CloudWatch alarms can be set and automated actions configured on the basis of these metrics. Why use custom metrics? Amazon Web Services provides a wide range of metrics for the EBS, EC2, and RDS services. Alarms can be configured on the basis of these metrics, and actions like terminating EC2 instances, restarting them, or sending messages to SQS can then be taken. Each of these services has different metrics associated with it, and CloudWatch can measure metrics provided by the hypervisor, like disk reads, disk writes, and disk usage for EBS, and CPU utilization and CPU credit usage.

Continue reading…
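
A minimal sketch of publishing a custom metric with boto3 is shown below; the namespace, metric name, dimensions, and reported value are hypothetical placeholders. An alarm can then be created on this metric just as with the built-in ones.

# A minimal sketch of publishing a custom CloudWatch metric with boto3.
# Namespace, metric name, dimensions, and the reported value are placeholders.
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_data(
    Namespace="MyApp/Custom",
    MetricData=[{
        "MetricName": "QueueDepth",
        "Dimensions": [{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
        "Value": 42.0,
        "Unit": "Count",
    }],
)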

Integrating AWS API Gateway, Lambda and DynamoDB

Posted on August 19, 2015 by Bhavik Joshi | Comments(23)

In the current API-driven world, every mobile application and website has to communicate through dedicated API servers. These dedicated servers are explicitly set up to handle the API calls for an application, acting as an intermediary between the application and the database. The bottleneck of this setup is that the API server has to be maintained to handle all the API calls; as the number of API calls increases, so does the load on the API server, which may require auto scaling and adds cost. A newer approach that many architects are adopting is to use an AWS service that replaces the need for a dedicated API server: AWS API Gateway can act as an interface between the application and the database, using AWS Lambda functions as the backend. To get the essence of AWS API Gateway, we need to get hands-on with it. The next part of the blog is a detailed tutorial on how to use AWS API Gateway along with AWS Lambda and DynamoDB. People who are familiar with DynamoDB, API Gateway, and Lambda can proceed with the high-level instructions. For people who are new to these services, there

Continue reading…
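
As a taste of the Lambda-as-backend pattern, here is a hedged sketch of a Lambda function that API Gateway could invoke to read an item from DynamoDB; the table name, key attribute, and event shape are hypothetical placeholders rather than the tutorial's actual code.

# A hedged sketch of a Lambda function that API Gateway could invoke as its
# backend to read from DynamoDB. Table name, key attribute, and event shape
# are hypothetical placeholders.
import json
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("Items")

def lambda_handler(event, context):
    # Assume API Gateway passes the item id in the incoming event.
    item_id = event["id"]
    response = table.get_item(Key={"id": item_id})
    return {
        "statusCode": 200,
        "body": json.dumps(response.get("Item", {})),
    }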