Learning Big Data with Amazon Elastic MapReduce


Book Description:

Amazon Elastic MapReduce (EMR) is a web service used to process and store vast amounts of data, and it is one of the largest Hadoop operators in the world. As businesses generate and collect ever more data, and as cost-effective cloud-based solutions for distributed computing become widely available, it has become far more practical to crunch large amounts of data for deep insights within a short span of time.

This book gets you started with AWS: you will quickly create your own account and explore the services it provides, many of which you may be delighted to use. The book covers the architectural details of the MapReduce framework, Apache Hadoop, the various job models on EMR, how to manage clusters on EMR, and the command-line tools available with EMR. Each chapter builds on the knowledge of the previous one, leading to the final chapter, where you will solve a real-world use case using Apache Hadoop and EMR. This book will therefore get you up and running with major Big Data technologies quickly and efficiently.

What You Will Learn

  • Create and access your account on AWS and learn about its various services
  • Launch a machine on AWS cloud infrastructure, obtain login credentials, and connect to that machine
  • Learn about the logical dataflow of MapReduce and how it uses distributed computing effectively (see the word-count sketch after this list)
  • Understand the benefits of EMR over a local Hadoop cluster
  • Discover the best practices to keep in mind while planning and executing a cluster or job on EMR
  • Launch a cluster on Amazon EMR, submit the Hello World wordcount job for processing, and download and view the results (a cluster-launch sketch follows the list)
  • Execute jobs on EMR using the two primary methods it provides
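
To give a flavor of the MapReduce dataflow mentioned above, here is a minimal word-count mapper and reducer written for Hadoop Streaming. This is an illustrative sketch, not code from the book: Streaming pipes input lines to the mapper on stdin, sorts the mapper's key/value output by key, and feeds it to the reducer.

```python
#!/usr/bin/env python
# wordcount_mapper.py -- illustrative sketch, not taken from the book.
# Emits "word<TAB>1" for every word read from stdin.
import sys

for line in sys.stdin:
    for word in line.split():
        print('%s\t%d' % (word.lower(), 1))
```

```python
#!/usr/bin/env python
# wordcount_reducer.py -- illustrative sketch, not taken from the book.
# Hadoop Streaming delivers mapper output sorted by key, so all
# occurrences of a word arrive as consecutive lines; sum them per word.
import sys

current_word, current_count = None, 0
for line in sys.stdin:
    word, count = line.rstrip('\n').split('\t', 1)
    if word == current_word:
        current_count += int(count)
    else:
        if current_word is not None:
            print('%s\t%d' % (current_word, current_count))
        current_word, current_count = word, int(count)
if current_word is not None:
    print('%s\t%d' % (current_word, current_count))
```

You can test the pair locally, without a cluster, by imitating the Streaming dataflow with a shell pipeline: `cat input.txt | python wordcount_mapper.py | sort | python wordcount_reducer.py`.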

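For the cluster side, the sketch below launches an EMR cluster and submits the same word-count job as a streaming step, using the boto3 Python SDK. The book itself covers EMR's console and command-line tools; boto3 is used here only to keep the example self-contained, and the region, release label, S3 bucket paths, and key pair name are placeholders you would substitute with your own values.

```python
# Sketch: launch an EMR cluster and submit a Hadoop Streaming word-count
# step with boto3. All names marked as placeholders are assumptions.
import boto3

emr = boto3.client('emr', region_name='us-east-1')  # placeholder region

response = emr.run_job_flow(
    Name='wordcount-demo',                     # hypothetical cluster name
    ReleaseLabel='emr-5.36.0',                 # assumed EMR release
    LogUri='s3://my-bucket/emr-logs/',         # placeholder log bucket
    Instances={
        'MasterInstanceType': 'm4.large',
        'SlaveInstanceType': 'm4.large',
        'InstanceCount': 3,                    # one master, two core nodes
        'KeepJobFlowAliveWhenNoSteps': False,  # terminate after the step
        'Ec2KeyName': 'my-key-pair',           # placeholder EC2 key pair
    },
    Steps=[{
        'Name': 'Hello World wordcount',
        'ActionOnFailure': 'TERMINATE_CLUSTER',
        'HadoopJarStep': {
            'Jar': 'command-runner.jar',
            'Args': [
                'hadoop-streaming',
                # Mapper/reducer scripts uploaded to S3 beforehand:
                '-files', 's3://my-bucket/wordcount_mapper.py,'
                          's3://my-bucket/wordcount_reducer.py',
                '-mapper', 'wordcount_mapper.py',
                '-reducer', 'wordcount_reducer.py',
                '-input', 's3://my-bucket/input/',    # placeholder input
                '-output', 's3://my-bucket/output/',  # placeholder output
            ],
        },
    }],
    JobFlowRole='EMR_EC2_DefaultRole',         # default instance profile
    ServiceRole='EMR_DefaultRole',             # default EMR service role
)
print('Cluster started:', response['JobFlowId'])
```

Once the step completes, the word counts land in the `-output` S3 prefix, from which you can download and view the results.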