In this course, you will learn the basics of SD-WAN deployment scenarios using the Fortinet Secure SD-WAN solution. You will learn how to configure SD-WAN on FortiGate and how SD-WAN interacts with FortiGate routing and firewall policies. This fundamental understanding of SD-WAN topics will help you design, deploy, and maintain SD-WAN deployments, and improve the performance of existing ones.
-
The EXIN BCS Machine Learning Award gives you a clear, structured introduction to machine learning—covering key algorithms, data processing, model training, and real-world applications. You’ll learn how to prepare and transform data, understand supervised and unsupervised learning, and get hands-on insights into programming languages and ML frameworks such as Python, TensorFlow, and Scikit-Learn—even if you’re new to AI.
-
The EXIN BCS Generative AI Award equips you with the essential knowledge to harness generative AI effectively, ethically, and strategically. Whether you’re looking to enhance your career, future-proof your skills, or gain a competitive edge, this certification validates your expertise in one of today’s most transformative technologies.
-
EXIN’s AI Compliance Professional (AICP) is the first certification that directly integrates the EU AI Act, ISO/IEC 42001, and NIST AI RMF into practical, lifecycle-based compliance — ideal for real-world business settings. Designed for professionals who need to implement, audit, and govern AI systems across their entire lifecycle — with ready-to-use templates, checklists, and governance controls. Master AI risk management, data privacy, and responsible AI implementation that customers will trust.
-
This course serves as an appropriate entry point to learn Apache Spark Programming with Databricks.
Below, we describe each of the four four-hour modules included in this course.
Introduction to Apache Spark
This course offers essential knowledge of Apache Spark, with a focus on its distributed architecture and practical applications for large-scale data processing. Participants will explore programming frameworks, learn the Spark DataFrame API, and develop skills for reading, writing, and transforming data using Python-based Spark workflows.
Developing Applications with Apache Spark
Master scalable data processing with Apache Spark in this hands-on course. Learn to build efficient ETL pipelines, perform advanced analytics, and optimize distributed data transformations using Spark’s DataFrame API. Explore grouping, aggregation, joins, set operations, and window functions. Work with complex data types like arrays, maps, and structs while applying best practices for performance optimization.
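As a small taste of the grouping and windowing topics listed above, the sketch below computes a per-region running total and rank in Spark SQL. The `sales` table and its columns are hypothetical, and the same logic can equally be expressed with the DataFrame API's `Window` functions.

```sql
-- Hypothetical sales table: (region STRING, order_date DATE, amount DOUBLE)
SELECT
  region,
  order_date,
  amount,
  -- Running total of sales within each region, ordered by date
  SUM(amount) OVER (PARTITION BY region ORDER BY order_date) AS running_total,
  -- Rank of each order by amount within its region
  RANK() OVER (PARTITION BY region ORDER BY amount DESC) AS amount_rank
FROM sales;
```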
Stream Processing and Analysis with Apache Spark
Learn the essentials of stream processing and analysis with Apache Spark in this course. Gain a solid understanding of stream processing fundamentals and develop applications using the Spark Structured Streaming API. Explore advanced techniques such as stream aggregation and window analysis to process real-time data efficiently. This course equips you with the skills to create scalable and fault-tolerant streaming applications for dynamic data environments.
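To illustrate the kind of stream aggregation and window analysis the course covers, the hedged sketch below counts events per one-minute window using Spark SQL's `window` function. The source path, schema, and `event_time` column are hypothetical; in the course you would typically build this with the Structured Streaming Python API, and depending on your runtime a watermark on `event_time` may be required for stateful aggregation.

```sql
-- Hypothetical streaming aggregation: events per one-minute window.
-- STREAM read_files() incrementally picks up new JSON files as they arrive;
-- the path and column names are illustrative only.
CREATE OR REFRESH STREAMING TABLE events_per_minute AS
SELECT
  window(event_time, '1 minute') AS time_window,
  COUNT(*) AS event_count
FROM STREAM read_files('/Volumes/demo/raw/events/', format => 'json')
GROUP BY window(event_time, '1 minute');
```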
Monitoring and Optimizing Apache Spark Workloads on Databricks
This course explores the Lakehouse architecture and Medallion design for scalable data workflows, focusing on Unity Catalog for secure data governance, access control, and lineage tracking. The curriculum includes building reliable, ACID-compliant pipelines with Delta Lake. You’ll examine Spark optimization techniques, such as partitioning, caching, and query tuning, and learn performance monitoring, troubleshooting, and best practices for efficient data engineering and analytics to address real-world challenges.
-
This course provides a comprehensive introduction to Databricks SQL. Learners will ingest data, write queries, produce visualizations and dashboards, and configure alerts. This course will prepare you to take the Databricks Certified Data Analyst Associate exam.
This course consists of two four-hour modules.
SQL Analytics on Databricks
In this course, you’ll learn how to effectively use Databricks for data analytics, with a specific focus on Databricks SQL. As a Databricks Data Analyst, your responsibilities will include finding relevant data, analyzing it for potential applications, and transforming it into formats that provide valuable business insights.
You will also understand your role in managing data objects and how to manipulate them within the Databricks Data Intelligence Platform, using tools such as Notebooks, the SQL Editor, and Databricks SQL.
Additionally, you will learn about the importance of Unity Catalog in managing data assets and the overall platform. Finally, the course will provide an overview of how Databricks facilitates performance optimization and teach you how to access Query Insights to understand the processes occurring behind the scenes when executing SQL analytics on Databricks.
AI/BI for Data Analysts
In this course, you’ll learn how to use the features Databricks provides for business intelligence needs: AI/BI Dashboards and AI/BI Genie. As a Databricks Data Analyst, you will be tasked with creating AI/BI Dashboards and AI/BI Genie Spaces within the platform, managing access to these assets for stakeholders and other parties, and maintaining these assets as they are edited, refreshed, or decommissioned over the course of their lifespan. The course teaches you to design dashboards for business insights, share them with collaborators and stakeholders, and maintain those assets within the platform. Participants will also learn how to use AI/BI Genie Spaces to support self-service analytics through the creation and maintenance of these environments, powered by the Databricks Data Intelligence Engine.
-
This is an introductory course that serves as an appropriate entry point to learn Data Engineering with Databricks.
Below, we describe each of the four four-hour modules included in this course.
1. Data Ingestion with Lakeflow Connect
This course provides a comprehensive introduction to Lakeflow Connect as a scalable and simplified solution for ingesting data into Databricks from a variety of data sources. You will begin by exploring the different types of connectors within Lakeflow Connect (Standard and Managed), learn about various ingestion techniques, including batch, incremental batch, and streaming, and then review the key benefits of Delta tables and the Medallion architecture.
From there, you will gain practical skills to efficiently ingest data from cloud object storage using Lakeflow Connect Standard Connectors with methods such as CREATE TABLE AS (CTAS), COPY INTO, and Auto Loader, along with the benefits and considerations of each approach. You will then learn how to append metadata columns to your bronze-level tables during ingestion into the Databricks Data Intelligence Platform. This is followed by working with the rescued data column, which handles records that don’t match the schema of your bronze table, including strategies for managing this rescued data.
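The three ingestion methods named above differ mainly in how they handle reruns and newly arriving files. The sketch below shows each against a hypothetical JSON landing path; the bucket, table names, and metadata columns are illustrative only.

```sql
-- 1. CREATE TABLE AS (CTAS): one-shot batch load of everything at the path,
--    with example metadata columns appended during ingestion.
CREATE TABLE bronze_orders_ctas AS
SELECT *, _metadata.file_name AS source_file, current_timestamp() AS ingest_ts
FROM read_files('s3://example-bucket/landing/orders/', format => 'json');

-- 2. COPY INTO: incremental batch -- reruns skip files already loaded.
CREATE TABLE IF NOT EXISTS bronze_orders;
COPY INTO bronze_orders
FROM 's3://example-bucket/landing/orders/'
FILEFORMAT = JSON
COPY_OPTIONS ('mergeSchema' = 'true');

-- 3. Auto Loader (via a streaming table): continuous or triggered
--    incremental ingestion as new files arrive.
CREATE OR REFRESH STREAMING TABLE bronze_orders_stream AS
SELECT * FROM STREAM read_files('s3://example-bucket/landing/orders/',
                                format => 'json');
```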
The course also introduces techniques for ingesting and flattening semi-structured JSON data, as well as enterprise-grade data ingestion using Lakeflow Connect Managed Connectors.
Finally, learners will explore alternative ingestion strategies, including MERGE INTO operations and leveraging the Databricks Marketplace, equipping you with foundational knowledge to support modern data engineering ingestion.
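As a companion to the MERGE INTO strategy mentioned above, the hedged sketch below upserts a batch of incoming changes into a Delta table; all table and column names are hypothetical.

```sql
-- Upsert: update existing customers, insert new ones.
-- customer_updates is a hypothetical staging table/view of incoming changes.
MERGE INTO silver_customers AS target
USING customer_updates AS source
  ON target.customer_id = source.customer_id
WHEN MATCHED THEN
  UPDATE SET *
WHEN NOT MATCHED THEN
  INSERT *;
```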
2. Deploy Workloads with Lakeflow Jobs
The Deploy Workloads with Lakeflow Jobs course teaches you how to orchestrate and automate data, analytics, and AI workflows using Lakeflow Jobs. You will learn to build robust, production-ready pipelines with flexible scheduling, advanced orchestration, and best practices for reliability and efficiency, all natively integrated within the Databricks Data Intelligence Platform. Prior experience with Databricks, Python, and SQL is recommended.
3. Build Data Pipelines with Lakeflow Spark Declarative Pipelines
This course introduces users to the essential concepts and skills needed to build data pipelines using Lakeflow Spark Declarative Pipelines (SDP) in Databricks for incremental batch or streaming ingestion and processing through multiple streaming tables and materialized views. Designed for data engineers new to Spark Declarative Pipelines, the course provides a comprehensive overview of core components such as incremental data processing, streaming tables, materialized views, and temporary views, highlighting their specific purposes and differences.
Topics covered include:
– Developing and debugging ETL pipelines with the multi-file editor in Spark Declarative Pipelines using SQL (with Python code examples provided)
– How Spark Declarative Pipelines track data dependencies in a pipeline through the pipeline graph
– Configuring pipeline compute resources, data assets, trigger modes, and other advanced options
Next, the course introduces data quality expectations in Spark Declarative Pipelines, guiding users through the process of integrating expectations into pipelines to validate and enforce data integrity. Learners will then explore how to put a pipeline into production, including scheduling options, and enabling pipeline event logging to monitor pipeline performance and health.
Finally, the course covers how to implement Change Data Capture (CDC) using the AUTO CDC INTO syntax within Spark Declarative Pipelines to manage slowly changing dimensions (SCD Type 1 and Type 2), preparing users to integrate CDC into their own pipelines.
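To make the CDC step concrete, the sketch below follows the general shape of the AUTO CDC syntax (the successor to APPLY CHANGES INTO); the table names, key, and sequencing column are hypothetical, so check the current Lakeflow Spark Declarative Pipelines documentation for the exact form before relying on it.

```sql
-- Target streaming table that will hold SCD Type 2 history.
CREATE OR REFRESH STREAMING TABLE customers_silver;

-- Apply the change feed from the bronze table, keyed and ordered as shown.
CREATE FLOW customers_cdc AS AUTO CDC INTO customers_silver
FROM STREAM(customers_bronze)
KEYS (customer_id)
SEQUENCE BY change_ts       -- orders changes so late-arriving data is applied correctly
STORED AS SCD TYPE 2;       -- keep full history; use SCD TYPE 1 to overwrite in place
```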
4. Data Management and Governance with Unity Catalog
In this course, you’ll learn about data management and governance using Databricks Unity Catalog. It covers foundational concepts of data governance, complexities in managing data lakes, Unity Catalog’s architecture, security, administration, and advanced topics like fine-grained access control, data segregation, and privilege management.
* This course seeks to prepare students to complete the Associate Data Engineering certification exam, and provides the requisite knowledge to take the course Advanced Data Engineering with Databricks.
-
This course serves as an appropriate entry point to learn Advanced Data Engineering with Databricks.
Below, we describe each of the four four-hour modules included in this course.
Databricks Streaming and Lakeflow Spark Declarative Pipelines
This course provides a comprehensive understanding of Spark Structured Streaming and Delta Lake, including computation models, configuration for streaming read, and maintaining data quality in a streaming environment.
Databricks Data Privacy
This content is intended for data engineers and for customers, partners, and employees who perform data engineering tasks with Databricks. It provides the knowledge and skills needed to execute these activities effectively on the Databricks platform.
Databricks Performance Optimization
In this course, you’ll learn how to optimize workloads and physical data layout with Spark and Delta Lake, and analyze the Spark UI to assess performance and debug applications. We’ll cover topics such as streaming, liquid clustering, data skipping, caching, Photon, and more.
Automated Deployment with Databricks Asset Bundles
This course provides a comprehensive review of DevOps principles and their application to Databricks projects. It begins with an overview of core DevOps, DataOps, continuous integration (CI), continuous deployment (CD), and testing, and explores how these principles can be applied to data engineering pipelines.
The course then focuses on continuous deployment within the CI/CD process, examining tools like the Databricks REST API, SDK, and CLI for project deployment. You will learn about Databricks Asset Bundles (DABs) and how they fit into the CI/CD process. You’ll dive into their key components, folder structure, and how they streamline deployment across various target environments in Databricks. You will also learn how to add variables, modify, validate, deploy, and execute Databricks Asset Bundles for multiple environments with different configurations using the Databricks CLI.
Finally, the course introduces Visual Studio Code as an Interactive Development Environment (IDE) for building, testing, and deploying Databricks Asset Bundles locally, optimizing your development process. The course concludes with an introduction to automating deployment pipelines using GitHub Actions to enhance the CI/CD workflow with Databricks Asset Bundles.
By the end of this course, you will be equipped to automate Databricks project deployments with Databricks Asset Bundles, improving efficiency through DevOps practices.
-
Welcome to Machine Learning with Databricks!
This course is your gateway to mastering machine learning workflows on Databricks. Dive into data preparation, model development, deployment, and operations, guided by expert instructors. Learn essential skills for data exploration, model training, and deployment strategies tailored for Databricks. By course end, you’ll have the knowledge and confidence to navigate the entire machine learning lifecycle on the Databricks platform, empowering you to build and deploy robust machine learning solutions efficiently.
-
In this course, you will be provided with a comprehensive understanding of the machine learning lifecycle and MLOps, emphasizing best practices for data and model management, testing, and scalable architectures. It covers key MLOps components, including CI/CD, pipeline management, and environment separation, while showcasing Databricks’ tools for automation and infrastructure management, such as Databricks Asset Bundles (DABs), Workflows, and Mosaic AI Model Serving. You will learn about monitoring, custom metrics, drift detection, model rollout strategies, A/B testing, and the principles of reliable MLOps systems, providing a holistic view of implementing and managing ML projects in Databricks.
-
In this course, you will gain theoretical and practical knowledge of Apache Spark’s architecture and its application to machine learning workloads within Databricks. You will learn when to use Spark for data preparation, model training, and deployment, while also gaining hands-on experience with Spark ML and pandas APIs on Spark.
This course will introduce you to advanced concepts like hyperparameter tuning and scaling Optuna with Spark. This course will use features and concepts introduced in the associate course such as MLflow and Unity Catalog for comprehensive model packaging and governance.
-
This course is aimed at data scientists, machine learning engineers, and other data practitioners who want to build generative AI applications using the latest and most popular frameworks and Databricks capabilities.
Below, we describe each of the four four-hour modules included in this course.
Generative AI Solution Development: This is your introduction to contextual generative AI solutions using the retrieval-augmented generation (RAG) method. First, you’ll be introduced to RAG architecture and the significance of contextual information using Mosaic AI Playground. Next, we’ll show you how to prepare data for generative AI solutions and connect this process with building a RAG architecture. Finally, you’ll explore concepts related to context embedding, vectors, vector databases, and the utilization of Mosaic AI Vector Search.
Generative AI Application Development: Ready for information and practical experience in building advanced LLM applications using multi-stage reasoning LLM chains and agents? In this module, you’ll first learn how to decompose a problem into its components and select the most suitable model for each step to enhance business use cases. Following this, we’ll show you how to construct a multi-stage reasoning chain utilizing LangChain and HuggingFace transformers. Finally, you’ll be introduced to agents and will design an autonomous agent using generative models on Databricks.
Generative AI Application Evaluation and Governance: This is your introduction to evaluating and governing generative AI systems. First, you’ll explore the meaning behind and motivation for building evaluation and governance/security systems. Next, we’ll connect evaluation and governance systems to the Databricks Data Intelligence Platform. Third, we’ll teach you about a variety of evaluation techniques for specific components and types of applications. Finally, the course will conclude with an analysis of evaluating entire AI systems with respect to performance and cost.
Generative AI Application Deployment and Monitoring: Ready to learn how to deploy, operationalize, and monitor generative AI applications? This module will help you gain skills in the deployment of generative AI applications using tools like Model Serving. We’ll also cover how to operationalize generative AI applications following best practices and recommended architectures. Finally, we’ll discuss monitoring generative AI applications and their components using Lakehouse Monitoring.
-
This AIOps Foundation course covers the origins of AIOps, including the history behind the term, the patterns that preceded it, and the technology context in which it has evolved.
- Gain an understanding of the process of combining big data analytics, machine learning algorithms, automation, and optimization into a single platform.
- Learn key principles and foundational concepts, along with the core technologies of AIOps: big data and machine learning.
- Validate your understanding of how and why digital transformation, together with the evolution of machine learning, has brought about the rise of AIOps as an indispensable tool in today’s IT operations landscape.
This foundation course will also provide the student with a solid understanding of the benefits of implementing AIOps in an organization, including common challenges and key steps for ensuring a valuable and successful integration of artificial intelligence into the day-to-day operation of IT solutions.
Unique and engaging exercises will be used to apply the concepts covered in the course, and sample documents, templates, tools, and techniques will be provided for use after class.
This course positions learners to successfully complete the AIOps Foundation certification exam.
-
An introduction to developing and deploying AI/ML applications on Red Hat OpenShift AI.
Developing and Deploying AI/ML Applications on Red Hat OpenShift AI (AI267) provides students with the fundamental knowledge about using Red Hat OpenShift for developing and deploying AI/ML applications. This course helps students build core skills for using Red Hat OpenShift AI to train, develop and deploy machine learning models through hands-on experience.
This course is based on Red Hat OpenShift® 4.16 and Red Hat OpenShift AI 2.13.
-
This course provides a comprehensive introduction to Microsoft 365, Copilot, and AI-powered agents. It introduces learners to the foundational concepts, core services, and administrative controls of Microsoft 365. It then builds upon this foundation by exploring how Copilot and agents can utilize AI to automate tasks, enhance collaboration, and personalize user experiences across the Microsoft 365 suite.
-
In this course, learners will discover how to apply generative AI to streamline daily tasks, enhance decision-making, and drive meaningful business outcomes. Learners will understand how to use Microsoft 365 Copilot and its functionalities to improve their productivity. The course focuses on real-world use cases—no coding required—making it ideal for those who want to confidently integrate AI into their work.
-
In this course, learners will explore how to lead AI transformation across their organization. They’ll learn practical strategies to identify high-impact AI opportunities, align investments with business goals, and champion responsible AI practices.
The course emphasizes real-world applications and strategic decision-making—no technical expertise required—making it ideal for senior leaders who want to confidently drive AI adoption and innovation.
-
This five-day course provides you with the knowledge, skills, and abilities to deploy, configure, and manage VMware vSphere Foundation. You will learn about the architecture of vSphere Foundation, including compute, storage, networking, and licensing.
This course prepares you to administer a vSphere Foundation environment, which includes VCF Operations 9.0, vCenter 9.0, and ESX 9.0.
-
Experience the possibilities of MLOps through proven open culture and practices used by Red Hat to support customer innovation.
-
MLOps Practices with Red Hat OpenShift AI (AI500) is a five-day immersive class that guides attendees through a complete MLOps adoption journey. Unlike courses focused on a single framework or tool, it demonstrates how leading open-source technologies integrate into a full MLOps workflow, blending continuous discovery, training, and delivery in realistic machine learning scenarios.
-
Cross-functional participation is essential for achieving the learning goals. Data scientists, ML engineers, platform engineers, architects, and product owners collaborate in a simulated real-world delivery environment. This daily routine shows how breaking down silos and working as a unified team drives innovation, equips participants with shared best practices, and strengthens organizational culture and processes.
-
The course is built on Red Hat technologies, specifically Red Hat OpenShift AI, Red Hat OpenShift GitOps, and Predictive AI, providing a practical foundation for applying modern MLOps methodologies.
-
The Advanced Generative AI Development on AWS course is designed for developers seeking to master the implementation of production-ready generative AI solutions on AWS. The course addresses the needs of organizations embarking on their generative AI journey and shows how to build comprehensive generative AI strategies that align with broader business objectives.
This advanced 3-day instructor-led training builds expertise across the entire generative AI stack, from foundation models to enterprise integration patterns. In addition, you will learn about advanced data processing techniques, vector database implementation and retrieval augmentation, sophisticated prompt engineering and governance, agentic AI systems and tool integration, AI safety and security measures, performance optimization and cost management strategies, comprehensive monitoring and observability solutions, and testing and validation frameworks.
The course structure follows AWS’s proven model for generative AI adoption, progressing from experimentation to production-ready implementations.
-
Cortex XDR is the industry’s most powerful extended detection and response platform. You will gain hands-on expertise in endpoint management, case management, forensic analysis, and platform automation. Throughout this course, you will explore the key features of Cortex XDR.
-
This 3-day instructor-led course provides in-depth training on Cortex XDR, Palo Alto Networks’ powerful extended detection and response platform. You will gain hands-on expertise in security operations, incident investigation, and system optimization to effectively protect modern environments. Throughout this course, you will explore the key features of Cortex XDR.
-
The Prisma Access SSE: Configuration and Deployment course introduces you to the operational deployment of Prisma Access Secure Access Service Edge (SASE) and how it helps organizations embrace the needs of the modern workforce by providing network connectivity and network security services from the cloud.
This course will enable you to deploy, configure, maintain, and troubleshoot Prisma Access using Strata Cloud Manager. The course is intended for professionals in cybersecurity and public-cloud security, as well as general network-security professionals who want to learn how to secure remote networks and mobile users.
-
This course covers the new features and enhancements in Oracle Database 19c across areas such as overall database management, security, availability, performance, big data and data warehousing, and diagnosability.






