Course Outline

Introduction:

  • Apache Spark in the Hadoop ecosystem
  • Short introduction to Python and Scala

Basics (theory):

  • Architecture
  • RDD
  • Transformations and Actions (see the sketch after this list)
  • Stages, Tasks, and Dependencies
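
As a minimal sketch of these ideas (assuming a local PySpark install; the data and app name are illustrative): transformations only build a lazy lineage, and an action triggers the actual job.

  from pyspark import SparkContext

  sc = SparkContext(master="local[*]", appName="rdd-basics")

  nums = sc.parallelize(range(1, 11))                   # build an RDD from a local collection
  squares = nums.map(lambda x: x * x)                   # transformation: lazy, nothing executes yet
  even_squares = squares.filter(lambda x: x % 2 == 0)   # another lazy transformation

  total = even_squares.reduce(lambda a, b: a + b)       # action: triggers a real job
  print(total)                                          # 220

  # The lineage that Spark compiles into stages and tasks; wide
  # dependencies (shuffles) are where stage boundaries appear.
  print(even_squares.toDebugString().decode())

  sc.stop()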

Using the Databricks environment to understand the basics (hands-on workshop):

  • Exercises using the RDD API (see the first sketch after this list)
  • Basic action and transformation functions
  • PairRDD
  • Join
  • Caching strategies
  • Exercises using the DataFrame API (see the second sketch after this list)
  • SparkSQL
  • DataFrame: select, filter, group, sort
  • UDF (User Defined Function)
  • A look at the Dataset API
  • Streaming (see the third sketch after this list)
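
First, a minimal sketch for the RDD exercises, covering pair RDDs, a join, and caching (data and names are illustrative):

  from pyspark import SparkContext, StorageLevel

  sc = SparkContext(master="local[*]", appName="pair-rdd-join")

  # Pair RDDs are RDDs of (key, value) tuples; they unlock the
  # *ByKey transformations and joins.
  orders = sc.parallelize([("u1", 40.0), ("u2", 25.0), ("u1", 10.0)])
  users = sc.parallelize([("u1", "Alice"), ("u2", "Bob")])

  totals = orders.reduceByKey(lambda a, b: a + b)   # wide transformation: shuffles by key

  # Cache an RDD that several actions will reuse; persist() with
  # MEMORY_ONLY is equivalent to cache().
  totals.persist(StorageLevel.MEMORY_ONLY)

  joined = totals.join(users)                       # inner join on the key
  print(sorted(joined.collect()))                   # [('u1', (50.0, 'Alice')), ('u2', (25.0, 'Bob'))]
  print(totals.count())                             # 2; reuses the cached partitions

  sc.stop()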
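
Second, a sketch of the DataFrame exercises: the DSL, the same query through SparkSQL, and a simple UDF. It assumes a local SparkSession; on Databricks a `spark` session is already provided.

  from pyspark.sql import SparkSession, functions as F
  from pyspark.sql.types import StringType

  spark = SparkSession.builder.master("local[*]").appName("df-basics").getOrCreate()

  df = spark.createDataFrame(
      [("Alice", "IT", 3100), ("Bob", "IT", 2700), ("Cara", "HR", 2900)],
      ["name", "dept", "salary"],
  )

  # select / filter / group / sort with the DataFrame DSL
  (df.select("dept", "salary")
     .filter(F.col("salary") > 2800)
     .groupBy("dept")
     .agg(F.avg("salary").alias("avg_salary"))
     .orderBy(F.desc("avg_salary"))
     .show())

  # The same query through SparkSQL
  df.createOrReplaceTempView("employees")
  spark.sql("""
      SELECT dept, AVG(salary) AS avg_salary
      FROM employees
      WHERE salary > 2800
      GROUP BY dept
      ORDER BY avg_salary DESC
  """).show()

  # A simple UDF; prefer built-in functions where one exists,
  # since UDFs are opaque to the optimizer.
  initial = F.udf(lambda name: name[0].upper(), StringType())
  df.withColumn("initial", initial("name")).show()

  spark.stop()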
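
Third, a small Structured Streaming sketch using the built-in rate source, which generates test rows so no external system is needed:

  from pyspark.sql import SparkSession, functions as F

  spark = SparkSession.builder.master("local[*]").appName("stream-demo").getOrCreate()

  # The rate source emits (timestamp, value) rows for testing.
  stream = spark.readStream.format("rate").option("rowsPerSecond", 5).load()

  # Count events per 10-second window.
  counts = stream.groupBy(F.window("timestamp", "10 seconds")).count()

  query = (counts.writeStream
      .outputMode("complete")
      .format("console")
      .start())
  query.awaitTermination(30)   # run for ~30 seconds
  query.stop()
  spark.stop()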

Using the AWS environment to understand deployment (hands-on workshop):

  • Basics of AWS Glue
  • Understand the differences between AWS EMR and AWS Glue
  • Example jobs in both environments (see the sketch after this list)
  • Understand the pros and cons of each
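
As a sketch of the Glue side, the standard boilerplate of a Glue PySpark job; the catalog database, table name, and S3 path here are hypothetical. On EMR the same transformation would ship as an ordinary spark-submit script, with full cluster control but without the Glue wrapper.

  import sys
  from awsglue.utils import getResolvedOptions
  from awsglue.context import GlueContext
  from awsglue.job import Job
  from pyspark.context import SparkContext

  args = getResolvedOptions(sys.argv, ["JOB_NAME"])

  sc = SparkContext()
  glueContext = GlueContext(sc)
  spark = glueContext.spark_session          # a plain SparkSession is still available
  job = Job(glueContext)
  job.init(args["JOB_NAME"], args)

  # Glue reads through DynamicFrames and the Data Catalog...
  dyf = glueContext.create_dynamic_frame.from_catalog(
      database="sales_db",                   # hypothetical catalog database
      table_name="raw_orders",               # hypothetical catalog table
  )

  # ...but converts freely to a DataFrame, so the Spark skills transfer directly.
  df = dyf.toDF().filter("amount > 0")
  df.write.mode("overwrite").parquet("s3://example-bucket/clean_orders/")  # hypothetical path

  job.commit()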

Extra:

  • Introduction to Apache Airflow orchestration (see the DAG sketch below)
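
As a minimal sketch, an Airflow DAG that submits a Spark job nightly; the script path is hypothetical, and parameter names shift slightly between Airflow releases:

  from datetime import datetime

  from airflow import DAG
  from airflow.operators.bash import BashOperator

  # One task that runs spark-submit every night.
  with DAG(
      dag_id="nightly_spark_etl",
      start_date=datetime(2024, 1, 1),
      schedule_interval="@daily",      # Airflow >= 2.4 also accepts `schedule=`
      catchup=False,
  ) as dag:
      submit_etl = BashOperator(
          task_id="spark_submit_etl",
          bash_command="spark-submit --master yarn /opt/jobs/etl_job.py",  # hypothetical job script
      )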

Requirements

Programming skills (preferably Python or Scala)

SQL basics

21 Hours
