Cloud Computing


5 Tech Areas that are Changed Forever after AWS re:Invent 2016

Over the years, AWS re:Invent announcements have consistently surpassed customers’ expectations, and this year they went beyond every customer’s imagination. Here are 5 tech areas that AWS has revolutionized with this year’s announcements and partnerships.

 

1. Ease and Volume of Migration to AWS

Although customers love moving data to AWS, drawn by its pricing model and breadth of services, until re:Invent 2016 it was almost impossible to migrate data at Exabyte (1,000 Petabyte) scale in and out of AWS.

A single Snowball device can transfer only 80 TB at a time, so many enterprises had to order multiple Snowball devices to move all their data to AWS. At that scale, size definitely matters. This is now fixed by the AWS Snowmobile service, whose announcement was the highlight of the keynote by Andy Jassy (CEO, AWS). Andy had reserved the announcement for the last 5 minutes of his 2-hour keynote.

There it was: a huge truck driving into the keynote hall. With the capacity to move 100 Petabytes of data from on-premises datacenters to AWS and vice versa, AWS Snowmobile is the world’s largest storage transfer device. You can now move up to 100 Petabytes of data to AWS within days of planning, with an AWS Snowmobile arriving at your datacenter. Exabyte-scale migration in and out of AWS is finally practical.
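To see why a truck beats the wire at this scale, here is a quick back-of-the-envelope calculation (the 1 Gbps dedicated link is an assumed example, not a figure from the announcement):

```python
def transfer_days(data_bytes, link_bits_per_second):
    """Days needed to move data_bytes over a link at full sustained throughput."""
    seconds = data_bytes * 8 / link_bits_per_second
    return seconds / 86400  # seconds per day

PB = 10 ** 15        # one petabyte in bytes
one_gbps = 10 ** 9   # an illustrative 1 Gbps dedicated link

# Moving the 100 PB capacity of a single Snowmobile over that link:
print(round(transfer_days(100 * PB, one_gbps)))  # 9259 days, i.e. over 25 years
```

Against roughly 25 years on a dedicated gigabit link, a truck that arrives in days is not a gimmick.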

AWS also announced Snowball Edge, which adds compute and clustering capabilities and can move 100 TB of data with S3 integration. Because Snowball Edge pairs storage with compute, you can place it wherever your data is generated and perform a first level of processing while the data is being collected; further processing can happen once the data reaches AWS.

Andy also announced PostgreSQL compatibility for Aurora, which lets many PostgreSQL users leverage Aurora’s availability and scalability. Using the AWS Database Migration Service, customers can now move their on-premises PostgreSQL databases to Aurora with ease.

AWS and VMware are also partnering, choosing collaboration over competition. The partnership will make it easier for existing VMware private-cloud customers to integrate with AWS, and to migrate workloads to and from it.

 


2. Internet of Things (IoT) Backbone

There was a big push towards the Internet of Things and connected devices, with AWS launching a very interesting service called AWS Greengrass. AWS Greengrass lets you compute directly on IoT devices by running AWS Lambda code locally on the device. This is very useful when the device’s internet connectivity is not consistent. When connectivity is restored, Greengrass syncs with AWS for persistent data storage, using its built-in messaging and security modules. This is a step toward the fog computing model promoted by Cisco and other vendors.

Multiple devices in a Greengrass group can also communicate and exchange data with each other even without an internet connection.
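The store-and-forward pattern behind this can be sketched in a few lines. This is a conceptual illustration only, not the Greengrass SDK; the class and method names are made up for the example:

```python
class EdgeBuffer:
    """Conceptual store-and-forward sketch (not the Greengrass SDK):
    readings queue locally while offline and flush once connectivity returns."""

    def __init__(self):
        self.pending = []   # messages waiting on the device
        self.online = False

    def publish(self, message, cloud):
        if self.online:
            cloud.append(message)        # stand-in for a publish to AWS IoT
        else:
            self.pending.append(message) # buffer locally on the device

    def reconnect(self, cloud):
        self.online = True
        while self.pending:              # drain the local queue in order
            cloud.append(self.pending.pop(0))

cloud = []
device = EdgeBuffer()
device.publish({"temp": 21.5}, cloud)    # offline: buffered, not sent
device.reconnect(cloud)                  # connectivity restored: flushed
print(cloud)  # [{'temp': 21.5}]
```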

With all these features, developers will come up with new and innovative ways for devices to interact with the cloud.

 

3. Artificial Intelligence and Machine Learning

Last year, AWS announced the Amazon Machine Learning service, enabling developers to build use cases such as recommendation engines, predictive analytics, and data classification. This year at re:Invent, AWS went a step further, announcing three new services that greatly expand its presence in artificial intelligence and cognitive computing.

Amazon Rekognition: An image recognition service: give it an image and it analyzes the content and suggests keywords, along with a confidence level for each keyword it returns.
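In practice you usually filter those keywords by their confidence. The response below is an illustrative sample in the shape Rekognition returns (a list of labels with `Name` and `Confidence`); the label values themselves are invented:

```python
def confident_labels(response, threshold=80.0):
    """Keep only the label names whose confidence meets the threshold."""
    return [label["Name"]
            for label in response["Labels"]
            if label["Confidence"] >= threshold]

# Illustrative sample response (made-up values, real field names):
sample = {"Labels": [{"Name": "Dog", "Confidence": 97.1},
                     {"Name": "Pet", "Confidence": 92.4},
                     {"Name": "Sled", "Confidence": 41.0}]}
print(confident_labels(sample))  # ['Dog', 'Pet']
```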

Amazon Polly: A text-to-speech synthesizer that can speak 24 languages in a human-like voice. It gives developers an easy, fast, and cost-effective way to convert text to speech for applications such as IVR systems.

Amazon Lex: The engine that powers the speech understanding and conversational interaction of Alexa, Amazon’s voice assistant. Speech recognition and natural language understanding are now just an API call away.

With the combination of these services, AWS took a big leap in artificial intelligence, covering almost everything required to build intelligent systems.

 

4. Serverless Development and Continuous Deployments

In the two years since its launch, Lambda has become one of the most widely used services on AWS. This year again, AWS left no stone unturned: Lambda received several updates expanding its scope and performance.

Lambda is now more tightly integrated with other AWS services, adding VPC support and API Gateway binary support. You can also set environment variables and design a complete serverless application using the API Gateway simple proxy integration. Along with Node.js 4.3 support, Lambda now supports C#, bringing .NET developers into the fold.
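A minimal handler sketch shows how the new environment-variable support keeps configuration out of code. The variable name `STAGE` is just an example we chose; here we invoke the handler locally for illustration, whereas Lambda would inject the variable and the `event` itself:

```python
import os

def handler(event, context):
    """Lambda-style handler: configuration comes from an environment
    variable rather than being hard-coded into the function body."""
    stage = os.environ.get("STAGE", "dev")  # hypothetical variable name
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"hello from {stage}: {name}"}

# Lambda would set this from the function's configuration; we fake it locally:
os.environ["STAGE"] = "prod"
print(handler({"name": "re:Invent"}, None))
```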

AWS also announced a new service, or rather an extension to Lambda, viz. Lambda@Edge. It runs Lambda functions at edge locations close to clients, making executions faster and more responsive. Lambda can now also be used as an extensibility mechanism with Amazon Lex, Greengrass, and Alexa Skills. Look out for our Senior Consultant Himanshu Sachdeva’s updates on his experiments with Amazon Echo Dot, Alexa, and Lambda.

The continuous-integration family has also seen a new service, making AWS a complete CI/CD platform. AWS CodeBuild joins CodeCommit, CodePipeline, and CodeDeploy; the family that was missing a build tool is now complete for end-to-end deployments.

Code analysis has also gained a new service this year: AWS X-Ray, which gives deeper insight into production code and makes it simpler to debug distributed applications in production.

 

5. Analytics Ecosystem

Although Amazon has been quite prudent about the services it offers in the analytics space, this year AWS announced major updates that will help developers and business stakeholders alike.

AWS CTO Werner Vogels began by describing the 10 major stages of a modern data architecture:

  1. Data Ingestion
  2. Preservation of Original Data Source
  3. LifeCycle Management and Cold Storage
  4. Metadata Capture
  5. Managing Governance, Security and Privacy
  6. Self-Service Discovery, Search and Access
  7. Managing Data Quality
  8. Preparing for Analytics
  9. Orchestration and Job Scheduling
  10. Capturing Data Change

AWS already has services covering many of these stages, including data ingestion into various storage options, durability of data at the source, cold storage, security, governance, and data-quality management. Werner rightly pointed out that several pieces were still missing and that a glue was needed to bring them together. He then introduced a new service called, fittingly, AWS Glue.

AWS Glue is a fully managed data catalog and ETL service. You can transform data, schedule extract-and-load jobs, and watch for changes in the original data source. AWS Glue currently integrates with S3, RDS, Redshift, and any JDBC-compliant data source.

Amazon Athena is another service in this space, enabling queries directly against objects in S3. Athena makes it easy to query S3 data using familiar SQL, and you pay only for the queries, not for an execution environment. Without worrying about infrastructure, customers can now query S3 data instantly and get results in a matter of seconds.
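Since billing is per query, cost scales with the data each query scans. Athena launched at $5 per terabyte scanned (check current regional pricing); a quick estimate:

```python
def athena_query_cost(bytes_scanned, usd_per_tb=5.0):
    """Estimated cost of one Athena query, billed on data scanned.
    Assumes the $5/TB launch pricing; actual pricing may vary by region."""
    terabytes = bytes_scanned / 10 ** 12
    return terabytes * usd_per_tb

# Scanning a 200 GB partition of S3 logs costs about a dollar:
print(athena_query_cost(200 * 10 ** 9))  # 1.0
```

This is also why partitioning and columnar formats matter: the less data a query touches, the less it costs.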

Lastly, Werner announced AWS Batch, a fully managed batch-processing service that works at any scale. Along with priority-based queuing, AWS Batch dynamically scales the infrastructure needed for batch jobs and uses Spot Fleets to stay cost-effective.

AWS re:Invent 2016 may well be one of the biggest yet, with releases across the breadth of AWS’s service families. The new services fill many gaps, improve application performance, enhance security, and ease development, deployment, management, and much more. As we start experimenting, we will try to bring you complete insight into the new services as we go along. Keep watching this space, and feel free to comment below.


WRITTEN BY CloudThat


Comments

  1. Martin Victor James

    Mar 14, 2017

    Reply

Thanks for sharing such wonderful information on AWS cloud services. Going through your post has given me a deep knowledge of AWS cloud computing and its functioning, and of how it differs from existing cloud-based services.

  2. Siddiqi

    Jan 30, 2017

    Reply

Thanks Sankeerth, very informative & useful article. Looking forward to similar updates.

    • Sankeerth Reddy

      Feb 1, 2017

      Reply

Thank you Siddiqi, will definitely do that. Please stay connected to our blog.

  3. Ondrej

    Dec 22, 2016

    Reply

    nice article, however you should fix this error about the unit in the 3rd paragraph
    “A single Snowball device would only transfer 80 Petabytes at a time”
    snowball can transfer up to 80TB. 🙂

    • Sankeerth Reddy

      Dec 24, 2016

      Reply

      Thanks for pointing it out Ondrej. Updated it now. Didn’t mean to write Petabyte after saying “Only” 🙂

