Course Overview
This course is an entry point for learning Advanced Data Engineering with Databricks.
Below, we describe each of the four four-hour modules included in this course.
Databricks Streaming and Lakeflow Spark Declarative Pipelines
This course provides a comprehensive understanding of Spark Structured Streaming and Delta Lake, including computation models, configuring streaming reads, and maintaining data quality in a streaming environment.
Databricks Data Privacy
This content is intended for data engineers: customers, partners, and employees who perform data engineering tasks with Databricks. It provides the knowledge and skills needed to carry out these activities effectively on the Databricks platform.
Databricks Performance Optimization
In this course, you’ll learn how to optimize workloads and physical layout with Spark and Delta Lake, and how to analyze the Spark UI to assess performance and debug applications. We’ll cover topics such as streaming, liquid clustering, data skipping, caching, Photon, and more.
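As a brief illustration of two of these layout techniques, the following Databricks SQL sketch creates a table with liquid clustering and then compacts and reclusters it; the table and column names are hypothetical, not from the course materials.

```sql
-- Hypothetical table using liquid clustering: CLUSTER BY names the
-- clustering keys, which also enable data skipping on those columns.
CREATE TABLE sales (
  order_id   BIGINT,
  region     STRING,
  order_date DATE
)
CLUSTER BY (region, order_date);

-- Incrementally recluster and compact the table's files.
OPTIMIZE sales;
```

Unlike Hive-style partitioning, the clustering keys can later be changed with `ALTER TABLE ... CLUSTER BY` without rewriting the table layout by hand.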
Automated Deployment with Databricks Asset Bundles
This course provides a comprehensive review of DevOps principles and their application to Databricks projects. It begins with an overview of core DevOps, DataOps, continuous integration (CI), continuous deployment (CD), and testing, and explores how these principles can be applied to data engineering pipelines.
The course then focuses on continuous deployment within the CI/CD process, examining tools like the Databricks REST API, SDK, and CLI for project deployment. You will learn about Databricks Asset Bundles (DABs) and how they fit into the CI/CD process. You’ll dive into their key components, folder structure, and how they streamline deployment across various target environments in Databricks. You will also learn how to add variables, modify, validate, deploy, and execute Databricks Asset Bundles for multiple environments with different configurations using the Databricks CLI.
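To give a feel for the bundle structure the course walks through, a minimal `databricks.yml` might look like the following; the bundle name, job, variable, and workspace URLs here are placeholders, not values from the course.

```yaml
# Minimal Databricks Asset Bundle configuration (hypothetical names).
bundle:
  name: my_project                 # assumed bundle name

variables:
  catalog:
    default: dev                   # can be overridden per target

targets:
  dev:
    mode: development
    default: true
    workspace:
      host: https://example-dev.cloud.databricks.com   # placeholder URL
  prod:
    mode: production
    workspace:
      host: https://example-prod.cloud.databricks.com  # placeholder URL

resources:
  jobs:
    nightly_job:                   # hypothetical job resource
      name: nightly_job
      tasks:
        - task_key: main
          notebook_task:
            notebook_path: ./src/main_notebook
```

With a file like this in place, the deployment cycle the course describes maps onto `databricks bundle validate`, `databricks bundle deploy -t prod`, and `databricks bundle run nightly_job -t prod`, where `-t` selects the target environment.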
Finally, the course introduces Visual Studio Code as an Integrated Development Environment (IDE) for building, testing, and deploying Databricks Asset Bundles locally, optimizing your development process. The course concludes with an introduction to automating deployment pipelines using GitHub Actions to enhance the CI/CD workflow with Databricks Asset Bundles.
By the end of this course, you will be equipped to automate Databricks project deployments with Databricks Asset Bundles, improving efficiency through DevOps practices.
What are the skills covered
- Databricks Streaming and Lakeflow Spark Declarative Pipelines
- Databricks Data Privacy
- Databricks Performance Optimization
- Automated Deployment with Databricks Asset Bundles
Who should attend this course
- Anyone interested in advanced data engineering with Databricks
What are the Prerequisites
- Ability to perform basic code development tasks using the Databricks Data Engineering and Data Science workspace (create clusters, run code in notebooks, use basic notebook operations, import repos from git, etc.)
- Intermediate programming experience with PySpark:
  - Extract data from a variety of file formats and data sources
  - Apply a number of common transformations to clean data
  - Reshape and manipulate complex data using advanced built-in functions
- Intermediate programming experience with Delta Lake (create tables, perform complete and incremental updates, compact files, restore previous versions, etc.)
- Beginner experience configuring and scheduling data pipelines using the Lakeflow Spark Declarative Pipelines UI
- Beginner experience defining Lakeflow Spark Declarative Pipelines using PySpark:
  - Ingest and process data using Auto Loader and PySpark syntax
  - Process Change Data Capture feeds with APPLY CHANGES INTO syntax
  - Review pipeline event logs and results to troubleshoot Declarative Pipelines syntax
- Strong knowledge of the Databricks platform, including experience with Databricks Workspaces, Apache Spark, Delta Lake, the Medallion Architecture, Unity Catalog, Lakeflow Declarative Pipelines, and Workflows, and in particular knowledge of leveraging Expectations with Lakeflow Declarative Pipelines
- Experience in data ingestion and transformation, with proficiency in PySpark for data processing and DataFrame manipulation, as well as experience writing intermediate-level SQL queries for data analysis and transformation
- Proficiency in Python programming, including the ability to design and implement functions and classes, and experience with creating, importing, and utilizing Python packages
- Familiarity with DevOps practices, particularly continuous integration and continuous delivery/deployment (CI/CD) principles
- A basic understanding of Git version control
- Completion of the prerequisite course DevOps Essentials for Data Engineering
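The Auto Loader and APPLY CHANGES INTO prerequisites above can be sketched in Databricks SQL roughly as follows; the table names, key column, and file path are hypothetical, and the exact syntax may vary by runtime version.

```sql
-- Hypothetical Lakeflow Declarative Pipelines definition (SQL).
-- Ingest raw files incrementally with Auto Loader via read_files:
CREATE OR REFRESH STREAMING TABLE customers_bronze
AS SELECT * FROM STREAM read_files('/Volumes/raw/customers/', format => 'json');

-- Declare a target table, then apply a CDC feed into it:
CREATE OR REFRESH STREAMING TABLE customers_silver;

APPLY CHANGES INTO customers_silver
FROM stream(customers_bronze)
KEYS (customer_id)          -- hypothetical primary key
SEQUENCE BY event_ts        -- hypothetical ordering column
STORED AS SCD TYPE 1;
```

Running such a definition as a pipeline produces the event logs that the troubleshooting prerequisite above refers to.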
Exam & Certification
Databricks Certified Data Engineer Professional exam.
The Databricks Certified Data Engineer Professional exam validates a candidate’s advanced skills in building, optimizing, and maintaining production-grade data engineering solutions on the Databricks Lakehouse Platform. Successful candidates demonstrate expertise across core platform features such as Delta Lake, Unity Catalog, Auto Loader, Lakeflow Declarative Pipelines, Databricks Compute (including serverless), Lakeflow Jobs, and the Medallion Architecture.
This certification assesses the ability to design secure, reliable, and cost-effective ETL pipelines, process complex data from diverse sources using Python and SQL, and apply best practices in schema management, observability, governance, and performance optimization.
Candidates are also tested on implementing streaming workloads, orchestrating workflows, leveraging DevOps & CI/CD, and deploying with tools like the Databricks CLI, REST API, and Asset Bundles. Individuals who pass this certification exam can be expected to complete advanced data engineering tasks using Databricks and its associated tools.
The exam covers:
- Developing Code for Data Processing using Python and SQL – 22%
- Data Ingestion & Acquisition – 7%
- Data Transformation, Cleansing, and Quality – 10%
- Data Sharing and Federation – 5%
- Monitoring and Alerting – 10%
- Cost & Performance Optimization – 13%
- Ensuring Data Security and Compliance – 10%
- Data Governance – 7%
- Debugging and Deploying – 10%
- Data Modeling – 6%