Video description
Apache Hadoop is a freely available, open source toolset that
enables big data analysis. This Hadoop Fundamentals LiveLessons
tutorial demonstrates the core components of Hadoop, including the
Hadoop Distributed File System (HDFS) and MapReduce. In addition,
the tutorial demonstrates how to use Hadoop at several levels,
including the native Java interface, C++ pipes, and the universal
streaming program interface. Examples of high-level tools include
the Pig scripting language and the Hive "SQL-like" interface.
Finally, the steps for installing Hadoop on a desktop virtual
machine, in a cloud environment, and on a local stand-alone cluster
are presented. Topics covered in this tutorial apply to Hadoop
version 2 (i.e., MRv2 or YARN).
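To illustrate the streaming interface mentioned above (a sketch for orientation, not code taken from the lessons): Hadoop Streaming lets any program that reads lines from standard input and writes tab-separated key/value pairs to standard output act as a mapper or reducer. A minimal word-count mapper in Python might look like:

```python
#!/usr/bin/env python
# Minimal word-count mapper for Hadoop Streaming (illustrative sketch).
# Hadoop Streaming feeds input lines on stdin and collects
# tab-separated "key<TAB>value" pairs from stdout.
import sys

def map_words(lines):
    """Emit a 'word<TAB>1' pair for every word in the input lines."""
    for line in lines:
        for word in line.strip().split():
            yield "%s\t1" % word

if __name__ == "__main__":
    for pair in map_words(sys.stdin):
        print(pair)
```

Such a script would typically be launched with the hadoop-streaming JAR shipped with Hadoop (the exact path varies by installation), with `-mapper` and `-reducer` options pointing at the scripts.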
About the Author:
Douglas Eadline, PhD, began his career as a practitioner and a
chronicler of the Linux Cluster HPC revolution and now documents
big data analytics. Starting with the first Beowulf How-To
document, Dr. Eadline has written hundreds of articles, white
papers, and instructional documents covering virtually all aspects
of HPC computing. Prior to starting and editing the popular
ClusterMonkey.net website in 2005, he served as Editor-in-Chief
of ClusterWorld Magazine and was Senior HPC Editor for Linux
Magazine. Currently, he is a consultant to the HPC industry and
writes a monthly column in HPC Admin Magazine. Both clients and
readers have recognized Dr. Eadline's ability to present a
"technological value proposition" in a clear and accurate style. He
has practical hands-on experience in many aspects of HPC, including
hardware and software design, benchmarking, storage, GPU, cloud,
and parallel computing.
Table of Contents
Introduction
Hadoop Fundamentals LiveLessons: Introduction
Lesson 1: Background Concepts
Learning objectives
1.1 Understand the problem Hadoop solves
1.2 Understand the Hadoop Version 1 approach
1.3 Understand the Hadoop Version 2 approach
1.4 Understand the Hadoop Project
Lesson 2: Running Hadoop on a Desktop or Laptop
Learning objectives
2.1 Install Hortonworks HDP 2.1 Sandbox
2.2 Install from Apache Hadoop sources
Lesson 3: The Hadoop Distributed File System
Learning objectives
3.1 Understand HDFS basics
3.2 Use HDFS tools and do administration
3.3 Use HDFS in programs
3.4 Utilize additional features of HDFS
Lesson 4: Hadoop MapReduce
Learning objectives
4.1 Understand the MapReduce paradigm
4.2 Develop and run a Java MapReduce application
4.3 Understand how MapReduce works
Lesson 5: Hadoop Examples
Learning objectives
5.1 Use the Streaming Interface
5.2 Use the Pipes Interface
5.3 Run the Hadoop grep example
5.4 Debug MapReduce
5.5 Understand Hadoop Version 2 MapReduce
5.6 Use Hadoop Version 2 features—Part 1
5.6 Use Hadoop Version 2 features—Part 2
Lesson 6: Higher Level Tools
Learning objectives
6.1 Use Pig
6.2 Use Hive
6.3 Demonstrate an Apache Flume example—Part 1
6.3 Demonstrate an Apache Flume example—Part 2
6.4 Demonstrate an Apache Sqoop example—Part 1
6.4 Demonstrate an Apache Sqoop example—Part 2
6.5 Demonstrate an Apache Oozie example—Part 1
6.5 Demonstrate an Apache Oozie example—Part 2
Lesson 7: Setting Up Hadoop in the Cloud
Learning objectives
7.1 Use Whirr to launch Hadoop in the Cloud
Lesson 8: Set Up Hadoop on a Local Cluster
Learning objectives
8.1 Specify and prepare servers
8.2 Install and configure Hadoop Core
8.3 Install and configure Pig and Hive
8.4 Install and configure Ganglia
8.5 Perform simple administration and monitoring
8.6 Install and configure Hadoop using Ambari
8.7 Perform simple administration and monitoring with Ambari—Part 1
8.7 Perform simple administration and monitoring with Ambari—Part 2
Summary
Hadoop Fundamentals LiveLessons: Summary