About the course
This course describes how to process big data using Azure tools and services, including Azure Stream Analytics, Azure Data Lake, Azure SQL Data Warehouse, and Azure Data Factory. The course also explains how to include custom functions and how to integrate Python and R.
Ideal for
The primary audience for this course is data engineers (IT professionals, developers, and information workers) who plan to implement big data engineering workflows on Azure.
Objectives
At the end of this course, students will be able to:
Describe common architectures for processing big data using Azure tools and services.
Describe how to use Azure Stream Analytics to design and implement stream processing over large-scale data.
Describe how to include custom functions and incorporate machine learning activities into an Azure Stream Analytics job.
Describe how to use Azure Data Lake Store as a large-scale repository of data files.
Describe how to use Azure Data Lake Analytics to examine and process data held in Azure Data Lake Store.
Describe how to create and deploy custom functions and operations, integrate with Python and R, and protect and optimize jobs.
Describe how to use Azure SQL Data Warehouse to create a repository that can support large-scale analytical processing over data at rest.
Describe how to use Azure SQL Data Warehouse to perform analytical processing, how to maintain performance, and how to protect the data.
Describe how to use Azure Data Factory to import, transform, and transfer data between repositories and services.
Prerequisite Knowledge
In addition to their professional experience, students who attend this course should have:
A good understanding of Azure data services.
A basic knowledge of the Microsoft Windows operating system and its core functionality.
A good knowledge of relational databases.