  • 5 Days ILT, VILT

    The Oracle Linux System Administration course covers a range of skills, including installation, troubleshooting and monitoring, employing the Unbreakable Enterprise Kernel, configuring Linux services, and preparing the system for Oracle Database.

    This course teaches trainees how to handle common problems administrators encounter in authentication, monitoring, and security.

  • 2 Days ILT, VILT

    Ansible offers a solution to the tedious task of administering a server fleet, providing a platform for configuring and managing compute instances.

    In this course, you will learn the basics of working with Ansible in OCI, including installation and configuration. You will also discover the benefits of Infrastructure as Code (IaC) with Ansible and learn to create its core building blocks, playbooks. Finally, you will write playbooks to configure and manage resources on OCI.

  • 2 Days ILT, VILT

    In this course, you will learn to install the Go and TypeScript SDKs for OCI and write code to deploy various core OCI services.

  • 2 Days ILT, VILT

    Oracle Cloud Infrastructure provides a number of Software Development Kits (SDKs) and a Command Line Interface (CLI) to facilitate development of custom solutions. In this course, learn to install the Python SDK for OCI along with the OCI CLI, and write code to deploy various core OCI services.

  • 5 Days ILT, VILT

    This Oracle WebLogic Server 14c: Administration II training is a continuation of Oracle WebLogic Server 14c: Administration I. It teaches you how to perform important administrative tasks, employing best practices that enable you to make the most of your WebLogic applications.

  • OJ-SE17-PC: Java SE 17: Programming Complete

    Price range: RM6,750.00 through RM7,853.00
    5 Days ILT, VILT

    Java SE 17 Developer

    This course is intended for students with some programming experience and provides comprehensive training in the Java programming language.

  • 5 Days ILT, VILT

    The Oracle WebLogic Server 14c: Administration I certification is a globally recognized qualification designed to validate an individual’s expertise in administering Oracle WebLogic Server solutions. It covers critical aspects such as creating WebLogic domains, deployments, and undertaking troubleshooting.

    Gaining this certification verifies an individual’s aptitude in Oracle WebLogic Server’s core functionality and its application within enterprise environments. Many industries regard this certification highly as it helps in optimizing system performance, scalability, and reliability.

    Thus, the certification is highly sought after by systems administrators, architects, and developers seeking to enhance their skills in leveraging Oracle WebLogic Server for building and deploying enterprise applications.

  • OM-DA8: MySQL 8.0 Database Administrator

    Price range: RM6,750.00 through RM7,853.00
    5 Days ILT, VILT

    The MySQL for Database Administrators course enables DBAs and other database professionals to maximize their organization’s investment in MySQL. Learn to configure the MySQL Server, set up replication and security, perform database backups and recoveries, optimize query performance, and configure for high availability.

  • DTB-ASPD: Apache Spark Programming with Databricks

    Price range: RM6,750.00 through RM7,650.00
    2 Days ILT, VILT

    This course serves as an appropriate entry point to learn Apache Spark Programming with Databricks.

    Below, we describe each of the four four-hour modules included in this course.

    Introduction to Apache Spark

    This course offers essential knowledge of Apache Spark, with a focus on its distributed architecture and practical applications for large-scale data processing. Participants will explore programming frameworks, learn the Spark DataFrame API, and develop skills for reading, writing, and transforming data using Python-based Spark workflows.

    Developing Applications with Apache Spark

    Master scalable data processing with Apache Spark in this hands-on course. Learn to build efficient ETL pipelines, perform advanced analytics, and optimize distributed data transformations using Spark’s DataFrame API. Explore grouping, aggregation, joins, set operations, and window functions. Work with complex data types like arrays, maps, and structs while applying best practices for performance optimization.

    Stream Processing and Analysis with Apache Spark

    Learn the essentials of stream processing and analysis with Apache Spark in this course. Gain a solid understanding of stream processing fundamentals and develop applications using the Spark Structured Streaming API. Explore advanced techniques such as stream aggregation and window analysis to process real-time data efficiently. This course equips you with the skills to create scalable and fault-tolerant streaming applications for dynamic data environments.
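    Spark Structured Streaming expresses windowed aggregation declaratively; as a rough illustration of the tumbling-window idea this module covers, here is a plain-Python sketch (not Spark code; the function and data are illustrative):

    ```python
    # Conceptual sketch only: a tumbling (fixed, non-overlapping) window
    # aggregation, the kind Structured Streaming performs declaratively.
    from collections import defaultdict

    def tumbling_window_counts(events, window_seconds):
        """Group (timestamp, key) events into fixed windows and count
        occurrences of each key per window."""
        counts = defaultdict(int)
        for ts, key in events:
            window_start = ts - (ts % window_seconds)  # align to window boundary
            counts[(window_start, key)] += 1
        return dict(counts)

    events = [(0, "a"), (3, "b"), (7, "a"), (12, "a"), (14, "b")]
    # 10-second tumbling windows: [0, 10) and [10, 20)
    print(tumbling_window_counts(events, 10))
    # {(0, 'a'): 2, (0, 'b'): 1, (10, 'a'): 1, (10, 'b'): 1}
    ```

    In Spark itself the same result would come from grouping a streaming DataFrame by a window expression over the event-time column, with the engine handling state and fault tolerance.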

    Monitoring and Optimizing Apache Spark Workloads on Databricks

    This course explores the Lakehouse architecture and Medallion design for scalable data workflows, focusing on Unity Catalog for secure data governance, access control, and lineage tracking. The curriculum includes building reliable, ACID-compliant pipelines with Delta Lake. You’ll examine Spark optimization techniques, such as partitioning, caching, and query tuning, and learn performance monitoring, troubleshooting, and best practices for efficient data engineering and analytics to address real-world challenges.

  • DTB-DED: Data Engineering with Databricks

    Price range: RM6,750.00 through RM7,650.00
    2 Days ILT, VILT

    This is an introductory course that serves as an appropriate entry point to learn Data Engineering with Databricks.

    Below, we describe each of the four four-hour modules included in this course.

    1. Data Ingestion with Lakeflow Connect

    This course provides a comprehensive introduction to Lakeflow Connect as a scalable and simplified solution for ingesting data into Databricks from a variety of data sources. You will begin by exploring the different types of connectors within Lakeflow Connect (Standard and Managed), learn about various ingestion techniques, including batch, incremental batch, and streaming, and then review the key benefits of Delta tables and the Medallion architecture.

    From there, you will gain practical skills to efficiently ingest data from cloud object storage using Lakeflow Connect Standard Connectors with methods such as CREATE TABLE AS (CTAS), COPY INTO, and Auto Loader, along with the benefits and considerations of each approach. You will then learn how to append metadata columns to your bronze level tables during ingestion into the Databricks data intelligence platform. This is followed by working with the rescued data column, which handles records that don’t match the schema of your bronze table, including strategies for managing this rescued data.
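    To make the rescued-data idea concrete: the sketch below is a plain-Python illustration (not the Auto Loader implementation; schema and field names are hypothetical) of how fields that don't match the expected bronze-table schema can be captured in a separate column rather than silently dropped:

    ```python
    # Conceptual sketch: capture non-conforming fields in a
    # "_rescued_data" column during ingestion (hypothetical schema).
    import json

    EXPECTED_SCHEMA = {"id", "name"}  # illustrative bronze-table columns

    def ingest_row(raw):
        """Split a raw record into schema-conforming fields and a
        _rescued_data column holding everything that didn't fit."""
        row = {col: raw.get(col) for col in EXPECTED_SCHEMA}
        extras = {k: v for k, v in raw.items() if k not in EXPECTED_SCHEMA}
        row["_rescued_data"] = json.dumps(extras, sort_keys=True) if extras else None
        return row

    row = ingest_row({"id": 1, "name": "widget", "color": "red"})
    print(row["_rescued_data"])   # {"color": "red"}
    ```

    A downstream job can then inspect `_rescued_data` to decide whether to evolve the schema or quarantine the offending records.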

    The course also introduces techniques for ingesting and flattening semi-structured JSON data, as well as enterprise-grade data ingestion using Lakeflow Connect Managed Connectors.

    Finally, learners will explore alternative ingestion strategies, including MERGE INTO operations and leveraging the Databricks Marketplace, equipping you with foundational knowledge to support modern data engineering ingestion.

    2. Deploy Workloads with Lakeflow Jobs

    The Deploy Workloads with Lakeflow Jobs course teaches how to orchestrate and automate data, analytics, and AI workflows using Lakeflow Jobs. You will learn to build robust, production-ready pipelines with flexible scheduling, advanced orchestration, and best practices for reliability and efficiency, all natively integrated within the Databricks Data Intelligence Platform. Prior experience with Databricks, Python, and SQL is recommended.

    3. Build Data Pipelines with Lakeflow Spark Declarative Pipelines 

    This course introduces users to the essential concepts and skills needed to build data pipelines using Lakeflow Spark Declarative Pipelines (SDP) in Databricks for incremental batch or streaming ingestion and processing through multiple streaming tables and materialized views. Designed for data engineers new to Spark Declarative Pipelines, the course provides a comprehensive overview of core components such as incremental data processing, streaming tables, materialized views, and temporary views, highlighting their specific purposes and differences.

    Topics covered include:

    – Developing and debugging ETL pipelines with the multi-file editor in Spark Declarative Pipelines using SQL (with Python code examples provided)

    – How Spark Declarative Pipelines track data dependencies in a pipeline through the pipeline graph

    – Configuring pipeline compute resources, data assets, trigger modes, and other advanced options

    Next, the course introduces data quality expectations in Spark Declarative Pipelines, guiding users through the process of integrating expectations into pipelines to validate and enforce data integrity. Learners will then explore how to put a pipeline into production, including scheduling options, and enabling pipeline event logging to monitor pipeline performance and health.

    Finally, the course covers how to implement Change Data Capture (CDC) using the AUTO CDC INTO syntax within Spark Declarative Pipelines to manage slowly changing dimensions (SCD Type 1 and Type 2), preparing users to integrate CDC into their own pipelines.
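    As a rough sketch of what SCD Type 2 means in practice (plain Python, not the `AUTO CDC INTO` syntax itself; the row fields are illustrative): each change closes out the current version of a dimension row and appends a new one, preserving full history.

    ```python
    # Conceptual sketch of SCD Type 2: never overwrite, only close out
    # the old version and append the new one (illustrative fields).
    def apply_scd2(dimension, change, seq):
        """dimension: list of version rows; change: (key, new_value);
        seq: ordering value (e.g. event time) used as the validity bound."""
        for row in dimension:
            if row["key"] == change[0] and row["is_current"]:
                row["is_current"] = False          # close out old version
                row["valid_to"] = seq
        dimension.append({"key": change[0], "value": change[1],
                          "valid_from": seq, "valid_to": None,
                          "is_current": True})      # new current version
        return dimension

    dim = []
    apply_scd2(dim, ("cust-1", "Kuala Lumpur"), seq=1)
    apply_scd2(dim, ("cust-1", "Penang"), seq=2)
    print([(r["value"], r["is_current"]) for r in dim])
    # [('Kuala Lumpur', False), ('Penang', True)]
    ```

    SCD Type 1, by contrast, would simply overwrite the value in place; the declarative CDC syntax lets the pipeline choose between the two behaviors without hand-writing this bookkeeping.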

    4. Data Management and Governance with Unity Catalog

    In this course, you’ll learn about data management and governance using Databricks Unity Catalog. It covers foundational concepts of data governance, complexities in managing data lakes, Unity Catalog’s architecture, security, administration, and advanced topics like fine-grained access control, data segregation, and privilege management.

    * This course seeks to prepare students to complete the Associate Data Engineering certification exam, and provides the requisite knowledge to take the course Advanced Data Engineering with Databricks.

  • DTB-ADED: Advanced Data Engineering with Databricks

    Price range: RM6,750.00 through RM7,650.00
    2 Days ILT, VILT

    This course serves as an appropriate entry point to learn Advanced Data Engineering with Databricks.

    Below, we describe each of the four four-hour modules included in this course.

    Databricks Streaming and Lakeflow Spark Declarative Pipelines

    This course provides a comprehensive understanding of Spark Structured Streaming and Delta Lake, including computation models, configuration for streaming read, and maintaining data quality in a streaming environment.

    Databricks Data Privacy

    This content is intended for data engineers, as well as customers, partners, and employees who perform data engineering tasks with Databricks. It aims to provide the knowledge and skills needed to execute these activities effectively on the Databricks platform.

    Databricks Performance Optimization

    In this course, you’ll learn how to optimize workloads and physical layout with Spark and Delta Lake, and analyze the Spark UI to assess performance and debug applications. We’ll cover topics like streaming, liquid clustering, data skipping, caching, Photon, and more.

    Automated Deployment with Databricks Asset Bundles

    This course provides a comprehensive review of DevOps principles and their application to Databricks projects. It begins with an overview of core DevOps, DataOps, continuous integration (CI), continuous deployment (CD), and testing, and explores how these principles can be applied to data engineering pipelines.

    The course then focuses on continuous deployment within the CI/CD process, examining tools like the Databricks REST API, SDK, and CLI for project deployment. You will learn about Databricks Asset Bundles (DABs) and how they fit into the CI/CD process. You’ll dive into their key components, folder structure, and how they streamline deployment across various target environments in Databricks. You will also learn how to add variables, modify, validate, deploy, and execute Databricks Asset Bundles for multiple environments with different configurations using the Databricks CLI.

    Finally, the course introduces Visual Studio Code as an Interactive Development Environment (IDE) for building, testing, and deploying Databricks Asset Bundles locally, optimizing your development process. The course concludes with an introduction to automating deployment pipelines using GitHub Actions to enhance the CI/CD workflow with Databricks Asset Bundles.

    By the end of this course, you will be equipped to automate Databricks project deployments with Databricks Asset Bundles, improving efficiency through DevOps practices.

  • DTB-MLD: Machine Learning with Databricks

    Price range: RM6,750.00 through RM7,650.00
    2 Days ILT, VILT

    Welcome to Machine Learning with Databricks!
    This course is your gateway to mastering machine learning workflows on Databricks. Dive into data preparation, model development, deployment, and operations, guided by expert instructors. Learn essential skills for data exploration, model training, and deployment strategies tailored for Databricks. By course end, you’ll have the knowledge and confidence to navigate the entire machine learning lifecycle on the Databricks platform, empowering you to build and deploy robust machine learning solutions efficiently.
