Tutorial: Introduction to Apache Hadoop - Seattle, WA
Thursday, July 19, 2012 from 1:00 PM to 5:00 PM (PDT)
This half-day tutorial is for anyone who wants to gain a deeper understanding of Apache Hadoop. It will help answer the questions:
- What is Hadoop?
- When is Hadoop appropriate?
- What are people using Hadoop for?
- How does Hadoop fit into our existing environment?
Who should attend?
This session is appropriate for developers, system administrators, managers, architects, or anyone who wants to jump-start their adventure with Hadoop. No prior Hadoop knowledge is required.
What will the tutorial cover?
The motivation for Hadoop
- What problems exist with ‘traditional’ large-scale computing systems
- How Hadoop is different
Basic concepts of Hadoop
- What Hadoop is
- What features the Hadoop Distributed File System (HDFS) provides
- The concepts behind MapReduce
- How a Hadoop cluster operates
The most common problems Hadoop can solve
- The types of analytics often performed with Hadoop
- Where the data comes from
- The benefits of analyzing data with Hadoop
- Real-world Hadoop use cases
The Hadoop Ecosystem
- What other projects exist around core Hadoop
- The differences between Hive and Pig
- When to use HBase
- How Flume and Sqoop are used to ingest data into a Hadoop cluster
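As a taste of the MapReduce concepts listed above, here is a minimal word-count sketch that mimics the map, shuffle, and reduce phases in plain Python. It is an illustrative model only, not Hadoop code: it runs on a single machine, and all names are our own.

```python
from collections import defaultdict

# Map phase: emit a (word, 1) pair for every word in every input line.
def map_phase(lines):
    for line in lines:
        for word in line.split():
            yield (word.lower(), 1)

# Shuffle phase: group all emitted values by key, as Hadoop does
# between the map and reduce stages.
def shuffle_phase(pairs):
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

# Reduce phase: sum the counts for each word.
def reduce_phase(groups):
    return {word: sum(counts) for word, counts in groups.items()}

lines = ["the quick brown fox", "the lazy dog"]
counts = reduce_phase(shuffle_phase(map_phase(lines)))
print(counts["the"])  # 2
```

In a real Hadoop job the map and reduce functions run in parallel across many machines, and the framework handles the shuffle, fault tolerance, and data locality; the tutorial covers how that works in practice.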
About Cloudera
Cloudera brings Hadoop to enterprise users. We provide a certified distribution based on the most recent stable release from Apache, online and live training, as well as commercial support.