DP-3014: Build machine learning solutions using Azure Databricks

Duration: 1 Day

Azure Databricks is a fully managed, cloud-based data analytics platform that empowers developers to accelerate AI and innovation by simplifying the process of building enterprise-grade data applications. Built jointly by Microsoft and the team that started Apache Spark, Azure Databricks gives data science, engineering, and analytics teams a single platform for big data processing and machine learning. In this course, you’ll learn how to use Azure Databricks to train and deploy machine learning models.

This course is designed for aspiring data scientists and AI engineers who need to train and manage machine learning models by using Azure Databricks.

Explore Azure Databricks

Azure Databricks is a cloud service that provides a scalable platform for data analytics using Apache Spark. A short notebook sketch follows the topic list below.

  • Get started with Azure Databricks
  • Identify Azure Databricks workloads
  • Understand key concepts
  • Data governance using Unity Catalog and Microsoft Purview
  • Exercise - Explore Azure Databricks
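
As a taste of what the exercise covers, here is a minimal sketch of running a query from an Azure Databricks notebook. It assumes the preconfigured spark session that every Databricks notebook provides and the samples.nyctaxi.trips table from the workspace's built-in samples catalog (availability varies by workspace).

    # A minimal sketch, assuming the preconfigured `spark` session in a
    # Databricks notebook and the built-in samples catalog (availability
    # varies by workspace).
    df = spark.sql("SELECT * FROM samples.nyctaxi.trips LIMIT 10")
    df.show()             # print the first rows as a text table
    print(spark.version)  # the Spark version backing the cluster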

Perform data analysis with Azure Databricks

Learn how to perform data analysis using Azure Databricks. Explore common data ingestion methods and how to integrate data from sources such as Azure Data Lake Storage and Azure SQL Database. This module guides you through using collaborative notebooks to perform exploratory data analysis (EDA), so you can visualize, manipulate, and examine data to uncover patterns, anomalies, and correlations. A short EDA sketch follows the topic list below.

  • Ingest data with Azure Databricks
  • Data exploration tools in Azure Databricks
  • Data analysis using DataFrame APIs
  • Exercise - Explore data with Azure Databricks
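
To illustrate the DataFrame APIs this module covers, the sketch below ingests a CSV file and runs a few basic exploratory checks. The file path and the region column are hypothetical placeholders for your own data.

    # A minimal EDA sketch in a Databricks notebook; the path and the
    # `region` column are hypothetical placeholders.
    df = (spark.read
          .option("header", "true")       # first row holds column names
          .option("inferSchema", "true")  # infer column types from the data
          .csv("/Volumes/main/default/raw/sales.csv"))

    df.printSchema()                     # inspect the inferred schema
    df.describe().show()                 # summary statistics for numeric columns
    df.groupBy("region").count().show()  # spot skew or anomalies by group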

Use Apache Spark in Azure Databricks

Azure Databricks is built on Apache Spark and enables data engineers and analysts to run Spark jobs that transform, analyze, and visualize data at scale. A short transformation sketch follows the topic list below.

  • Get to know Spark
  • Create a Spark cluster
  • Use Spark in notebooks
  • Use Spark to work with data files
  • Visualize data
  • Exercise - Use Spark in Azure Databricks
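
To ground these topics, here is a sketch of a typical transform-and-aggregate step over a data file, ending with the built-in display() charting available in Databricks notebooks. The Parquet path and the amount and category columns are hypothetical placeholders.

    # A minimal sketch of transforming a data file with Spark; the path
    # and columns (`amount`, `category`) are hypothetical placeholders.
    from pyspark.sql import functions as F

    orders = spark.read.parquet("/Volumes/main/default/raw/orders")

    summary = (orders
               .filter(F.col("amount") > 0)                  # drop invalid rows
               .groupBy("category")
               .agg(F.sum("amount").alias("total_amount")))  # aggregate at scale

    display(summary)  # display() renders built-in charts in Databricks notebooks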

Manage data with Delta Lake

Delta Lake is the data management layer in Azure Databricks. It provides ACID transactions, schema enforcement, and time travel, which together give you data consistency, integrity, and versioning. A short Delta Lake sketch follows the topic list below.

  • Get started with Delta Lake
  • Create Delta tables
  • Implement schema enforcement
  • Data versioning and time travel in Delta Lake
  • Data integrity with Delta Lake
  • Exercise - Use Delta Lake in Azure Databricks
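
As a sketch of the features listed above, the snippet below creates a small Delta table, inspects its history, and reads an earlier version with time travel. The main.default schema is a placeholder for a catalog and schema in your own workspace.

    # A minimal Delta Lake sketch; `main.default` is a placeholder for a
    # catalog and schema in your own workspace.
    df = spark.createDataFrame([(1, "Ann"), (2, "Ben")], ["id", "name"])
    df.write.format("delta").saveAsTable("main.default.customers")

    # Versioning: every write is recorded in the table's transaction log.
    spark.sql("DESCRIBE HISTORY main.default.customers").show()

    # Time travel: query the table as it looked at an earlier version.
    v0 = spark.sql("SELECT * FROM main.default.customers VERSION AS OF 0")

    # Schema enforcement: appends with a mismatched schema are rejected
    # unless you explicitly opt in to schema evolution (e.g. mergeSchema).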

Build Lakeflow Declarative Pipelines

Building Lakeflow Declarative Pipelines enables real-time, scalable, and reliable data processing that uses Delta Lake's advanced features in Azure Databricks. A short pipeline sketch follows the topic list below.

  • Explore Lakeflow Declarative Pipelines
  • Data ingestion and integration
  • Real-time processing
  • Exercise - Create a Lakeflow Declarative Pipeline
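
As a sketch of what a declarative pipeline definition looks like, the snippet below uses the dlt Python module (the Delta Live Tables API that Lakeflow Declarative Pipelines builds on). The landing path is a hypothetical placeholder, and this code runs only as part of a pipeline, not as a standalone notebook.

    # A minimal declarative pipeline sketch using the `dlt` module; the
    # landing path is a hypothetical placeholder, and this code executes
    # only when run as part of a pipeline.
    import dlt
    from pyspark.sql import functions as F

    @dlt.table(comment="Raw events ingested incrementally with Auto Loader")
    def raw_events():
        return (spark.readStream
                .format("cloudFiles")                 # Auto Loader
                .option("cloudFiles.format", "json")
                .load("/Volumes/main/default/landing/events"))

    @dlt.table(comment="Cleaned events, ready for downstream use")
    def clean_events():
        return (dlt.read_stream("raw_events")
                .filter(F.col("event_type").isNotNull()))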

Deploy workloads with Lakeflow Jobs

Deploying workloads with Lakeflow Jobs involves orchestrating and automating complex data processing pipelines, machine learning workflows, and analytics tasks. In this module, you learn how to deploy workloads with Databricks Lakeflow Jobs. A short job-creation sketch follows the topic list below.

  • What are Lakeflow Jobs?
  • Understand key components of Lakeflow Jobs
  • Explore the benefits of Lakeflow Jobs
  • Deploy workloads using Lakeflow Jobs
  • Exercise - Create a Lakeflow Job
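
Jobs are typically defined in the workspace UI or with Databricks Asset Bundles, but as a sketch, the snippet below creates a simple notebook job with the Databricks SDK for Python. The job name, notebook path, and cluster ID are hypothetical placeholders for resources in your own workspace.

    # A minimal sketch using the Databricks SDK for Python
    # (pip install databricks-sdk); the notebook path and cluster ID are
    # hypothetical placeholders for resources in your own workspace.
    from databricks.sdk import WorkspaceClient
    from databricks.sdk.service import jobs

    w = WorkspaceClient()  # picks up credentials from the environment

    job = w.jobs.create(
        name="nightly-etl",
        tasks=[
            jobs.Task(
                task_key="run_notebook",
                notebook_task=jobs.NotebookTask(
                    notebook_path="/Workspace/Users/me@example.com/etl"),
                existing_cluster_id="1234-567890-abcde123",
            )
        ],
    )
    print(f"Created job {job.job_id}")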

This class has hands-on labs provided by Go Deploy.