Cloudera Data Analyst Training: Using Pig, Hive and Impala with Hadoop

Overview

Cloudera University’s four-day data analyst course is for anyone who wants to access, manipulate, transform, and analyze massive data sets in the Hadoop cluster using SQL and familiar scripting languages. This is the core curriculum in the data analyst learning path.

Cloudera University’s Data Analyst Training course focuses on Apache Pig, Apache Hive, and Apache Impala. You will learn how to apply traditional data analytics and business intelligence skills to big data, using the tools data professionals need to access, manipulate, transform, and analyze complex data sets with SQL and familiar scripting languages.

Apache Pig applies the fundamentals of familiar scripting languages to the Hadoop cluster. Apache Hive makes transformation and analysis of complex, multi-structured data scalable in Hadoop. Apache Impala enables real-time, interactive analysis of the data stored in Hadoop via a native SQL environment. Together, Pig, Hive, and Impala make multi-structured data accessible to analysts, database administrators, and others without Java programming expertise.
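
For illustration, a minimal sketch of the kind of SQL analysis the course builds toward; the table and column names (sales, product, price) are hypothetical, and the same statement can be submitted to Hive as a batch query or run interactively in Impala:

    -- Hypothetical sales table: total revenue per product, highest first
    SELECT product, SUM(price) AS revenue
    FROM sales
    GROUP BY product
    ORDER BY revenue DESC
    LIMIT 10;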


Who should attend

This course is designed for data analysts, business intelligence specialists, developers, system architects, and database administrators. Knowledge of SQL is assumed, as is basic Linux command-line familiarity. Knowledge of at least one scripting language (e.g., Bash scripting, Perl, Python, Ruby) would be helpful but is not essential. Prior knowledge of Apache Hadoop is not required.

Prerequisites

  • Knowledge of SQL
  • Basic Linux command-line familiarity
  • Knowledge of at least one scripting language (e.g., Bash scripting, Perl, Python, Ruby)
  • Prior knowledge of Apache Hadoop is not required

Course Objectives

Through instructor-led discussion and interactive, hands-on exercises, participants will navigate the Hadoop ecosystem and explore topics such as:

  • The features that Pig, Hive, and Impala offer for data acquisition, storage, and analysis
  • The fundamentals of Apache Hadoop and data ETL (extract, transform, load), ingestion, and processing with Hadoop
  • How Pig, Hive, and Impala improve productivity for typical analysis tasks
  • Joining diverse datasets to gain valuable business insight
  • Performing real-time, complex queries on datasets

Product Description

  • Introduction
  • Apache Hadoop Fundamentals
  • Introduction to Apache Pig
  • Basic Data Analysis with Apache Pig
  • Processing Complex Data with Apache Pig
  • Multi-Dataset Operations with Apache Pig
  • Apache Pig Troubleshooting and Optimization
  • Introduction to Apache Hive and Impala
  • Querying with Apache Hive and Impala
  • Apache Hive and Impala Data Management
  • Data Storage and Performance
  • Relational Data Analysis with Apache Hive and Impala
  • Complex Data with Apache Hive and Impala
  • Analyzing Text with Apache Hive and Impala
  • Apache Hive Optimization
  • Apache Impala Optimization
  • Extending Apache Hive and Impala
  • Choosing the Best Tool for the Job
  • Conclusion

Outline

Introduction
Apache Hadoop Fundamentals
  • The Motivation for Hadoop
  • Hadoop Overview
  • Data Storage: HDFS
  • Distributed Data Processing: YARN, MapReduce, and Spark
  • Data Processing and Analysis: Pig, Hive, and Impala
  • Database Integration: Sqoop
  • Other Hadoop Data Tools
  • Exercise Scenarios
Introduction to Apache Pig
  • What is Pig?
  • Pig’s Features
  • Pig Use Cases
  • Interacting with Pig
Basic Data Analysis with Apache Pig
  • Pig Latin Syntax
  • Loading Data
  • Simple Data Types
  • Field Definitions
  • Data Output
  • Viewing the Schema
  • Filtering and Sorting Data
  • Commonly Used Functions
Processing Complex Data with Apache Pig
  • Storage Formats
  • Complex/Nested Data Types
  • Grouping
  • Built-In Functions for Complex Data
  • Iterating Grouped Data
Multi-Dataset Operations with Apache Pig
  • Techniques for Combining Datasets
  • Joining Datasets in Pig
  • Set Operations
  • Splitting Datasets
Apache Pig Troubleshooting and Optimization
  • Troubleshooting Pig
  • Logging
  • Using Hadoop’s Web UI
  • Data Sampling and Debugging
  • Performance Overview
  • Understanding the Execution Plan
  • Tips for Improving the Performance of Pig Jobs
Introduction to Apache Hive and Impala
  • What is Hive?
  • What is Impala?
  • Why Use Hive and Impala?
  • Schema and Data Storage
  • Comparing Hive and Impala to Traditional Databases
  • Use Cases
Querying with Apache Hive and Impala
  • Databases and Tables
  • Basic Hive and Impala Query Language Syntax
  • Data Types
  • Using Hue to Execute Queries
  • Using Beeline (Hive’s Shell)
  • Using the Impala Shell
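
For example, a minimal sketch of the basic query syntax this module covers (the dualcore database and customers table are hypothetical names); the same statements can be run from Hue, Beeline, or the Impala shell:

    -- Hypothetical database and table; syntax shared by Hive and Impala
    CREATE DATABASE IF NOT EXISTS dualcore;
    USE dualcore;

    SELECT cust_id, fname, lname
    FROM customers
    WHERE state = 'CA'
    ORDER BY lname
    LIMIT 10;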
Apache Hive and Impala Data Management
  • Data Storage
  • Creating Databases and Tables
  • Loading Data
  • Altering Databases and Tables
  • Simplifying Queries with Views
  • Storing Query Results
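
As a sketch of the data management tasks listed above (the paths and table names are hypothetical): an external table over tab-delimited files already in HDFS, plus a view that simplifies a common filter:

    -- Hypothetical: point a table at existing HDFS data rather than moving it
    CREATE EXTERNAL TABLE orders_staging (
      order_id   INT,
      cust_id    INT,
      order_date TIMESTAMP,
      total      DECIMAL(10,2)
    )
    ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
    LOCATION '/dualcore/orders_staging';

    -- A view hides the filter from downstream queries
    CREATE VIEW recent_orders AS
    SELECT order_id, cust_id, total
    FROM orders_staging
    WHERE order_date >= CAST('2023-01-01' AS TIMESTAMP);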
Data Storage and Performance
  • Partitioning Tables
  • Loading Data into Partitioned Tables
  • When to Use Partitioning
  • Choosing a File Format
  • Using Avro and Parquet File Formats
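
A minimal sketch of partitioning combined with the Parquet format (table and column names are hypothetical); each year's rows land in their own partition directory, so queries that filter on order_year read only the relevant files:

    -- Hypothetical: Parquet table partitioned by year
    CREATE TABLE orders_by_year (
      order_id INT,
      cust_id  INT,
      total    DECIMAL(10,2)
    )
    PARTITIONED BY (order_year INT)
    STORED AS PARQUET;

    -- Populate one partition from a hypothetical staging table
    INSERT OVERWRITE TABLE orders_by_year PARTITION (order_year = 2023)
    SELECT order_id, cust_id, total
    FROM orders_staging
    WHERE year(order_date) = 2023;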
Relational Data Analysis with Apache Hive and Impala
  • Joining Datasets
  • Common Built-In Functions
  • Aggregation and Windowing
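
For example, a sketch combining a join, an aggregate, and a window function (the customers and orders tables are hypothetical), ranking customers by total spend:

    -- Hypothetical tables: join, aggregate, then rank by the aggregate
    SELECT c.cust_id,
           c.lname,
           SUM(o.total) AS total_spent,
           RANK() OVER (ORDER BY SUM(o.total) DESC) AS spend_rank
    FROM customers c
    JOIN orders o ON o.cust_id = c.cust_id
    GROUP BY c.cust_id, c.lname;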
Complex Data with Apache Hive and Impala
  • Complex Data with Hive
  • Complex Data with Impala
Analyzing Text with Apache Hive and Impala
  • Using Regular Expressions with Hive and Impala
  • Processing Text Data with SerDes in Hive
  • Sentiment Analysis and n-grams in Hive
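
As an illustration of using regular expressions in queries (the web_logs table and its single raw-line column are hypothetical), regexp_extract pulls the request path out of each log line:

    -- Hypothetical table of raw log lines: count hits per request path
    SELECT regexp_extract(line, '"[A-Z]+ ([^ ]+) HTTP', 1) AS request_path,
           COUNT(*) AS hits
    FROM web_logs
    GROUP BY regexp_extract(line, '"[A-Z]+ ([^ ]+) HTTP', 1)
    ORDER BY hits DESC
    LIMIT 20;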
Apache Hive Optimization
  • Understanding Query Performance
  • Bucketing
  • Indexing Data
  • Hive on Spark
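
A minimal sketch of bucketing in Hive (the table name and bucket count are hypothetical); clustering rows by cust_id can make joins and sampling on that key cheaper:

    -- Hypothetical: hash rows into 16 buckets on the join key
    CREATE TABLE customers_bucketed (
      cust_id INT,
      fname   STRING,
      lname   STRING
    )
    CLUSTERED BY (cust_id) INTO 16 BUCKETS
    STORED AS PARQUET;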
Apache Impala Optimization
  • How Impala Executes Queries
  • Improving Impala Performance
Extending Apache Hive and Impala
  • Custom SerDes and File Formats in Hive
  • Data Transformation with Custom Scripts in Hive
  • User-Defined Functions
  • Parameterized Queries
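
For instance, a sketch of a parameterized Hive query using variable substitution (the report_year variable, the orders table, and the script name are hypothetical), run with a command such as beeline --hivevar report_year=2023 -f yearly_report.hql:

    -- Hypothetical: report_year is supplied at run time via --hivevar
    SELECT COUNT(*) AS order_count
    FROM orders
    WHERE year(order_date) = ${hivevar:report_year};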
Choosing the Best Tool for the Job
  • Comparing Pig, Hive, Impala, and Relational Databases
  • Which to Choose?
Conclusion

E-Learning

Price (excl. tax)
  • US$ 2,235.—

Subscription duration: 180 days