About the Author:


Sankeerth Reddy

Senior Cloud Solutions Engineer, CloudThat. Sankeerth is an AWS Certified Solutions Architect with many years of experience working with Amazon Web Services (AWS). He is also certified by MongoDB and specializes in running MongoDB clusters on AWS. He has experience architecting and managing a multi-shard MongoDB cluster on AWS that handles over 1,000 requests per second at peak. He currently leads a CloudThat consulting team and has been involved with various large and complex global clients, some of them among the top 100 most visited sites in the USA. He has architected various solutions on AWS and managed deployments on AWS. In his previous tenure at TCS, he worked on developing and deploying internal applications on the AWS cloud.


IAM @ CloudThat

Posted on October 5, 2015 by Sankeerth Reddy | Comments(0)

Identity and Access Management (IAM) on AWS is an authentication and authorization service that provides access to AWS resources for multiple users. To learn how the permissions work and their hierarchy, check out this link. How to use IAM to achieve hassle-free and maximum control over users and resources depends entirely on organizational usage, and is a continuous learning process. Here, I would like to give an insight into how we at CloudThat use IAM on our account, which is used by all employees, to restrict access and enable cost tracking and management. Before diving into the details, below are a few of the limits that one needs to be aware of before working with IAM.

- Groups in one AWS Account: 100
- Users in one AWS Account: 5000
- Groups a user can be a member of: 10
- Customer Managed Policies for an AWS Account: 1000
- Versions of managed policies per policy: 5
- Managed Policies that can be attached to a user or group: 10
- Size of each managed policy: 5120 characters
- Inline policies that can be attached to a user or group: Unlimited
- Aggregate inline policy size for a user: 2048 characters
- Aggregate inline policy size for a group:
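The size limits above can be checked before pushing a policy to IAM. A minimal Python sketch, assuming the limit values from the table; the sample policy document and the helper name are illustrative only, and AWS counts the serialized policy size excluding whitespace:

```python
import json

# A few IAM limits quoted in the post (values as of the time of writing).
IAM_LIMITS = {
    "groups_per_account": 100,
    "users_per_account": 5000,
    "groups_per_user": 10,
    "managed_policy_size_chars": 5120,
    "inline_policy_size_per_user_chars": 2048,
}

def fits_managed_policy_limit(policy: dict) -> bool:
    """Check a policy document against the managed-policy size limit.

    Serializes without whitespace, since whitespace does not count
    toward the character limit.
    """
    serialized = json.dumps(policy, separators=(",", ":"))
    return len(serialized) <= IAM_LIMITS["managed_policy_size_chars"]

# Hypothetical read-only policy, used only for illustration.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": ["ec2:Describe*"], "Resource": "*"}
    ],
}
print(fits_managed_policy_limit(policy))  # True: a policy this small fits
```

A check like this is handy in deployment scripts, since IAM rejects oversized policies only at creation time.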

Continue reading…

IAM Authorization Hierarchy

Posted on May 5, 2015 by Sankeerth Reddy | Comments(2)

Identity and Access Management (IAM) is the service that provides authentication and authorization control for users and resources in one's AWS account. While authentication is quite simple and straightforward, viz. one can create groups, users, and credentials for the users and share them, authorization is quite tricky on IAM. Also, S3 bucket policies and bucket ACLs, SNS topic policies, and SQS policies are other resource-level permissions which work in tandem with IAM permissions. While IAM works from a user's perspective, the others work from a resource's perspective. Many a time, I get questions like "Which permission gets higher priority?" or "A user is in two groups with contradicting permissions; which one takes effect?" or "What happens when no permissions are given?". This blog is to clear up these kinds of ambiguities while using IAM, S3 bucket policies, or any other mode of permission on AWS for that matter. AWS has always adopted a least-privilege approach across all services: if no permission for a specific resource is specified, the result is a "Deny". After this initial deny decision, the final decision to deny or allow is based on the "Explicit" allows and denies specified in the
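The evaluation order just described can be sketched in a few lines of Python. This is a simplified model for illustration only, not the full IAM evaluation logic (real statements also support wildcards, conditions, and principals):

```python
# Sketch of the IAM evaluation logic: start from an implicit deny,
# let an explicit allow grant access, and let an explicit deny
# override everything else.

def evaluate(statements, action, resource):
    """Return "Allow" or "Deny" for an (action, resource) pair."""
    decision = "Deny"  # implicit default deny
    for stmt in statements:
        if action in stmt["Action"] and resource == stmt["Resource"]:
            if stmt["Effect"] == "Deny":
                return "Deny"       # an explicit deny always wins
            decision = "Allow"      # explicit allow, pending any deny
    return decision

# Contradicting permissions, e.g. from two groups a user belongs to:
statements = [
    {"Effect": "Allow", "Action": ["s3:GetObject"], "Resource": "my-bucket"},
    {"Effect": "Deny", "Action": ["s3:GetObject"], "Resource": "my-bucket"},
]
print(evaluate(statements, "s3:GetObject", "my-bucket"))  # Deny
print(evaluate([], "s3:GetObject", "my-bucket"))          # Deny (implicit)
```

This directly answers the common questions above: with contradicting group permissions the explicit deny wins, and with no permissions at all the result is the implicit deny.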

Continue reading…

5 Reasons MongoDB is better than DynamoDB

Posted on May 30, 2014 by Sankeerth Reddy | Comments(1)

With NoSQL databases leading the next generation of data stores, MongoDB and DynamoDB are two very viable options for quite a number of use cases. While MongoDB is an open source document store, DynamoDB is a managed key-value store offered as a service by Amazon. MongoDB, being open source and supported on various platforms (cloud and in-house), offers higher control over the database compared to a managed data store like DynamoDB. My colleague wrote an article, 5 reasons why DynamoDB is better than MongoDB. In response, below I give five reasons to choose MongoDB over DynamoDB. Reason 1: It's all about the data model and indexes. Proper indexing is arguably the most important part of NoSQL database design. MongoDB allows you to have an arbitrary index on any of the fields. It also supports many other kinds of indexes, including compound indexes, multikey indexes on arrays, and geospatial indexes. In version 2.6, MongoDB has support for full-text indexes to enable faster text pattern searches. DynamoDB, on the other hand, provides limited indexing capabilities: the primary key is indexed, and it now allows indexing on other keys via Local Secondary Indexes (LSI)
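To make the value of compound indexes concrete, here is a pure-Python sketch of what one does: keys are kept in sorted order, so an equality match on a key prefix becomes a binary search plus a short range scan. The collection, field names, and documents are made up for illustration; in the mongo shell the equivalent index would be db.coll.createIndex({city: 1, zip: 1}):

```python
import bisect

docs = [
    {"city": "Austin", "zip": "78701", "name": "a"},
    {"city": "Boston", "zip": "02101", "name": "b"},
    {"city": "Boston", "zip": "02108", "name": "c"},
    {"city": "Chicago", "zip": "60601", "name": "d"},
]

# Build a compound index on (city, zip): sorted (key, position) pairs.
index = sorted(((d["city"], d["zip"]), i) for i, d in enumerate(docs))
keys = [k for k, _ in index]

def find_by_city(city):
    """Range-scan the index for entries whose first key part == city."""
    lo = bisect.bisect_left(keys, (city, ""))
    hi = bisect.bisect_right(keys, (city, "\uffff"))
    return [docs[pos] for _, pos in index[lo:hi]]

print([d["name"] for d in find_by_city("Boston")])  # ['b', 'c']
```

Because the index is sorted on (city, zip), the same structure also serves queries on city alone, which is the "prefix" property that makes compound indexes flexible.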

Continue reading…

MongoDB Monitoring Service – Installation and Set Up

Posted on May 21, 2014 by Sankeerth Reddy | Comments(0)

MongoDB Monitoring Service, or MMS, is a free monitoring application developed by the MongoDB team to manage and troubleshoot MongoDB deployments. Once set up correctly, you get a bunch of metrics that can be very useful when troubleshooting production issues. MMS is also used by the MongoDB team to provide suggestions and optimization techniques. In this post, I will briefly cover the installation steps and a few tips and tricks to set up MMS for one's MongoDB cluster, without having to spend a lot of time. On the whole, there are two steps to set up MMS for a sharded cluster. 1: Install and start the monitoring agent on one of the nodes, preferably on a mongos machine as it has access to all the nodes (shards and config servers). 2: Add the nodes to the MMS console for monitoring. Task 1: 1. Firstly, create an account on mms.mongodb.com and log in to MMS. 2. Get the monitoring agent installation instructions on the settings page. 3. Select the platform on which the MMS agent needs to be installed to get the corresponding instructions. 4. The API keys need to be updated in the configuration file after installing the agent. The
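A common stumbling block in step 4 is a missing or empty API key in the agent's configuration file. A small sketch that validates a key=value style config before starting the agent; the property names (mmsApiKey, mmsBaseUrl) and the sample contents are assumptions for illustration, so confirm the exact names against the instructions shown on your MMS settings page:

```python
REQUIRED_KEYS = {"mmsApiKey"}  # assumed property name; verify in MMS docs

def parse_agent_config(text):
    """Parse simple key=value lines, ignoring blanks and # comments."""
    config = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        config[key.strip()] = value.strip()
    return config

def missing_keys(config):
    """Return required keys that are absent or left empty."""
    return sorted(REQUIRED_KEYS - {k for k, v in config.items() if v})

sample = """
# monitoring-agent config (illustrative contents only)
mmsApiKey=0123456789abcdef
mmsBaseUrl=https://mms.mongodb.com
"""
print(missing_keys(parse_agent_config(sample)))  # [] -> nothing missing
```

Running a check like this on the mongos host before starting the agent saves a round of log-digging when the agent silently fails to report.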

Continue reading…

Sample Questions for MongoDB Certified DBA (C100DBA) exam – Part II

Posted on April 21, 2014 by Sankeerth Reddy | Comments(8)

Here are some more sample questions for the C100DBA: MongoDB Certified DBA Associate Exam. Please give them a try; the answers are at the end of this blog post. If you have not yet attempted Part I of the sample questions, they are available here.

Section 1: Philosophy & Features:
1. Which of the following are valid JSON documents? Select all that apply.
a. {"name":"Fred Flintstone";"occupation":"Miner";"wife":"Wilma"}
b. {}
c. {"city":"New York", "population", 7999034, boros:{"queens", "manhattan", "staten island", "the bronx", "brooklyn"}}
d. {"a":1, "b":{"b":1, "c":"foo", "d":"bar", "e":[1,2,4]}}

Section 2: CRUD Operations:
1. Which of the following operators is used to update a document partially?
a. $update
b. $set
c. $project
d. $modify

Section 3: Aggregation Framework:
Questions 1 to 3. Below is a sample document of the "orders" collection:
{ cust_id: "abc123", ord_date: ISODate("2012-11-02T17:04:11.102Z"), status: 'A', price: 50, items: [ { sku: "xxx", qty: 25, price: 1 }, { sku: "yyy", qty: 25, price: 1 } ] }
Select operators for the query below to determine the sum of "qty" fields associated with the orders for each "cust_id".
db.orders.aggregate( [ { $OPR1: "$items" }, { $OPR2: { _id: "$cust_id", qty: { $OPR3: "$items.qty" } } } ] )
1. OPR1 is a.
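To build intuition for what a pipeline of this shape computes, here is a pure-Python re-implementation of an unwind-then-group-with-sum pipeline over the sample document above. This is a sketch of the semantics, not of the server's implementation, and a real collection would of course hold many such documents:

```python
from collections import defaultdict

orders = [
    {
        "cust_id": "abc123",
        "status": "A",
        "price": 50,
        "items": [
            {"sku": "xxx", "qty": 25, "price": 1},
            {"sku": "yyy", "qty": 25, "price": 1},
        ],
    }
]

# Stage 1: unwind -- emit one output document per element of the
# items array, with "items" replaced by that single element.
unwound = [
    {**doc, "items": item} for doc in orders for item in doc["items"]
]

# Stage 2: group by cust_id, accumulating the sum of items.qty.
totals = defaultdict(int)
for doc in unwound:
    totals[doc["cust_id"]] += doc["items"]["qty"]

print(dict(totals))  # {'abc123': 50}
```

Tracing the two stages by hand like this is a good way to check your operator choices before looking at the answers.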

Continue reading…

Sample Questions for MongoDB Certified DBA (C100DBA) exam – Part I

Posted on April 5, 2014 by Sankeerth Reddy | Comments(19)

Below are some sample questions for the C100DBA: MongoDB Certified DBA Associate Exam. You can read more about the MongoDB Certified DBA Exam here. Please give them a try; the answers are at the end of this blog post.

Section 1: Philosophy & Features:
1. Which of the following does MongoDB use to provide High Availability and fault tolerance?
a. Write Concern
b. Replication
c. Sharding
d. Indexing
2. Which of the following does MongoDB use to provide High Scalability?
a. Write Concern
b. Replication
c. Sharding
d. Indexing

Section 2: CRUD Operations:
1. Which of the following is a valid insert statement in MongoDB? Select all that are valid.
a. db.test.insert({x:2,y:"apple"})
b. db.test.push({x:2,y:"apple"})
c. db.test.insert({"x":2, "y":"apple"})
d. db.test.insert({x:2},{y:"apple"})

Section 3: Aggregation Framework:
1. Which of the following is true about the aggregation framework?
a. A single aggregation framework operator can be used more than once in a query
b. Each aggregation operator needs to return at least one or more documents as a result
c. Pipeline expressions are stateless, except accumulator expressions used with the $group operator
d. The aggregate command operates on multiple collections

Section 4: Indexing:
Below is a sample document in a given collection test.
{ a

Continue reading…

Resource level Permission for EC2

Posted on March 27, 2014 by Sankeerth Reddy | Comments(0)

Elastic Compute Cloud (EC2) has proven to be a flagship service of Amazon Web Services, where one can allocate and use powerful computing resources (EC2 instances) with ease. Using EC2, one can perform complex operations like allocating virtual resources for development, testing, and production with a few clicks. As it is rightly said, with great power comes great responsibility: managing the resources, and access to those resources, is a vital task. The Identity and Access Management (IAM) module enables one to securely control users' access to AWS services and resources. Using IAM, one can create and manage AWS users and groups and use permissions to allow and deny their access to AWS resources. With enterprises opting for AWS as their preferred cloud service provider, higher granularity in user permissions has become the need of the hour. For example, one would not want the development or testing teams to have permission to make changes to production instances, which should be accessible only to specific trusted administrators. This granularity can be achieved by using tag-based permissions in IAM user access policies. A tag is a simple user-defined key-value pair attached to an instance. By default, when launching an EC2 instance, one provides
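A tag-based policy of the kind described above can look like the following sketch: it allows start/stop only on instances carrying the tag Environment=Dev, via the ec2:ResourceTag condition key. The region, account ID, tag name, and tag value are placeholders for illustration:

```python
import json

# Illustrative policy: permit start/stop only on Dev-tagged instances.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["ec2:StartInstances", "ec2:StopInstances"],
            # Placeholder region and account ID:
            "Resource": "arn:aws:ec2:us-east-1:123456789012:instance/*",
            "Condition": {
                "StringEquals": {"ec2:ResourceTag/Environment": "Dev"}
            },
        }
    ],
}

print(json.dumps(policy, indent=2))
```

Attached to the development group, a policy like this leaves production instances (tagged, say, Environment=Production) untouched even though the actions themselves are allowed.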

Continue reading…

Preparing for MongoDB Certified DBA Associate Exam

Posted on March 5, 2014 by Sankeerth Reddy | Comments(17)

MongoDB Inc. has recently released a certification program. There was a lack of certification in the MongoDB ecosystem, and after having worked with MongoDB for a few years now, the news about MongoDB certification got me excited. I recently appeared for the MongoDB Certified DBA Associate Exam and am going to share a few details about the exam and my experience. The MongoDB website has very sparse information, so I am hoping the information here will help fellow exam takers. Why get MongoDB certified now? According to MongoDB, "Certification helps you establish technical credibility and facility with MongoDB and contributes to your organization's proficiency in running applications on the platform." MongoDB is growing to be one of the preferred NoSQL databases in the market. Its flexibility in terms of the amount of data supported, and its ease of horizontal scalability and administration, both on-premise and in the cloud, are making corporates opt for MongoDB as their preferred next-generation database. With the growing number of MongoDB users, this certificate would definitely make you stand out from the crowd. As the certification is available for only a few weeks a year, the number of certified DBAs would also be accordingly small. Hence, it is time to leverage the opportunity and become one

Continue reading…

AWS adds two new Route53 Health Checking features

Posted on February 4, 2014 by Sankeerth Reddy | Comments(0)

AWS recently added two features to Route 53 health checks: HTTPS support and string matching. Let's see what these features do and how they would help a DevOps engineer. To begin with, Route 53 health checks have hitherto supported TCP and HTTP endpoints. In both options, Route 53 tries to establish a TCP connection, which needs to succeed within four seconds. In the case of an HTTP endpoint, Route 53 expects an HTTP status code of 200 or greater (but less than 400) in the response within two seconds of connecting in order to conclude that the resource is healthy. HTTPS Support: HTTPS support in Route 53 simplifies resource health checks over SSL. Similar to an HTTP health check, Route 53 tries to establish a TCP connection to the resource over port 443 (by default). Prior to HTTPS support, web servers with SSL enabled had to serve at least one page over HTTP for the health check to pass, but not anymore. Matching String: Now, in an HTTP or HTTPS health check, you can also specify a string that Route 53 needs to find in the response in order to conclude that the instance is healthy. I see
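The decision rules described above can be sketched as a tiny predicate: healthy means a status code in [200, 400) and, when a search string is configured, that string appearing in the response body. Timeouts, retry thresholds, and the limit on how much of the body is searched are omitted for brevity:

```python
# Simplified model of the Route 53 health-check decision, for
# illustration only; the function name and signature are made up.

def is_healthy(status_code, body, search_string=None):
    """Return True if the endpoint would be considered healthy."""
    if not (200 <= status_code < 400):
        return False  # outside the accepted status range
    if search_string is not None:
        return search_string in body  # string matching, when configured
    return True

print(is_healthy(200, "<html>OK</html>"))                 # True
print(is_healthy(503, "<html>OK</html>"))                 # False
print(is_healthy(200, "<html>maintenance</html>", "OK"))  # False
print(is_healthy(301, "<html>OK</html>", "OK"))           # True
```

The third case shows why string matching is useful: a server can return 200 while actually serving an error or maintenance page, and only the string check catches that.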

Continue reading…