This course teaches how to build QualityStage parallel jobs that investigate, standardize, match, and consolidate data records. Students will gain experience by building an application that combines customer data from three source systems into a single master customer record.
-
This course teaches you to perform basic and advanced database administration tasks using Db2 11.1. These tasks include creating and populating databases and implementing a logical design to support recovery requirements. The access strategies selected by the Db2 Optimizer will be examined using the Db2 Explain tools. Various diagnostic methods will be presented, including using various db2pd command options. Students will learn how to implement automatic archival for database logs and how to plan a redirected database restore to relocate either selected table spaces or an entire database. The REBUILD option of RESTORE, which can build a database copy with a subset of the table spaces, will be discussed. We will also cover using the TRANSPORT option of RESTORE to copy schemas of objects between two Db2 databases. The selection of indexes to improve application performance and the use of SQL statements to track database performance and health will also be covered. This course provides a quick start to Db2 database administration skills for experienced relational database administrators (DBAs).
The lab demonstrations are performed using Db2 11.1 for Linux. For some lab tasks, students will have the option to complete the task using the Db2 command line processor or the graphical interface provided by IBM Data Server Manager.
If you are enrolling in a Self-Paced Virtual Classroom or Web-Based Training course, please review the Self-Paced Virtual Classes and Web-Based Training Classes section of our Terms and Conditions page, as well as the system requirements, before you enroll to ensure that your system meets the minimum requirements for this course.
-
WF318G: Developing Applications in IBM Datacap v9.1.7 provides technical professionals with the skills needed to build Datacap applications.
The course begins with an introduction to IBM Datacap. You learn about capture concepts, the Datacap process, page identification methods, and architecture. You process batches for Datacap applications in the Datacap clients.
You learn about the design and components of a Datacap application. You build and configure a Datacap application by using the Forms Template in Datacap Studio, and you learn how to troubleshoot it. You configure a Datacap application to process documents of multiple page types in a single batch. You implement OCR and OMR to extract data from data fields and from multiple-choice check boxes. You export data to a text file and also to an IBM FileNet Content Manager repository. You build page layouts, create virtual page blocks, and extract data from tables and label-value pairs. Through instructor-led presentations and hands-on lab exercises, you learn about the core features of IBM Datacap.
-
What is machine learning, and what kinds of problems can it solve? Why are neural networks so popular right now? How can you improve data quality and perform exploratory data analysis? How can you set up a supervised learning problem and find a good, generalizable solution using gradient descent? In this course, you’ll learn how to write distributed machine learning models that scale in TensorFlow 2.x, perform feature engineering in BQML and Keras, evaluate loss curves and perform hyperparameter tuning, and train models at scale with Cloud AI Platform.
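To give a flavor of the gradient descent idea the course builds on, here is a minimal, self-contained sketch in plain Python (not course material, and much simpler than the TensorFlow tooling the course uses): it fits a line y = w*x + b to a few points by repeatedly stepping against the gradient of the mean squared error. The data, learning rate, and step count are illustrative assumptions.

```python
# Minimal gradient descent sketch: fit y = w*x + b by least squares.
# Illustrative only -- real courses use TensorFlow/Keras, not hand-rolled loops.

def fit_linear(xs, ys, lr=0.01, steps=2000):
    """Fit a line to (xs, ys) by gradient descent on mean squared error."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        # Partial derivatives of MSE = mean((w*x + b - y)**2) w.r.t. w and b
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        # Step against the gradient to reduce the loss
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Points lying exactly on y = 3x + 1; the fit should recover w ~ 3, b ~ 1
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 4.0, 7.0, 10.0]
w, b = fit_linear(xs, ys)
```

The same loop generalizes to the models in the course: swap the two-parameter line for a neural network and the hand-written gradients for automatic differentiation, and you have the training procedure TensorFlow runs at scale.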
-
The Automation Business Analyst Associate training is designed to equip you with the essential skills and knowledge needed to excel as an Automation Business Analyst, specifically focusing on the UiPath Automation Implementation Methodology. It covers both foundational and in-depth concepts.
Additionally, it allows you to develop your skills by going through practical case studies, thus gaining some hands-on experience.
Ready to embark on this transformative learning experience? Let’s go!
-
Create and configure production-grade ROSA clusters as part of a customer's larger AWS footprint.
Creating and Configuring Production ROSA Clusters (CS220) teaches how to create and configure ROSA clusters in pre-existing AWS environments and how to integrate ROSA with AWS services commonly used by IT operations teams, such as Amazon CloudWatch.
-
Use Red Hat OpenShift to manage OpenStack services and RHEL compute nodes that run VM-based workloads.
The CL170: OpenStack Administration: Control Plane Management course helps Red Hat OpenStack cluster administrators to manage the health and performance of OpenStack control plane services, to troubleshoot issues by inspecting Kubernetes operators and workloads, and to configure OpenStack control plane services by using Kubernetes custom resources.
This course is based on Red Hat OpenStack Services on OpenShift 18.
-
Learn the essential skills to migrate virtual machines to Red Hat OpenShift Virtualization.
The Migrating Virtual Machines to Red Hat OpenShift Virtualization with Ansible Automation Platform (DO346) course provides the essential knowledge to migrate virtual machines to Red Hat OpenShift Virtualization by using carefully selected content from Managing Virtual Machines in Red Hat OpenShift Virtualization (DO316) and Automate and Manage Red Hat OpenShift Virtualization with Ansible (DO336). This Red Hat course provides a shorter learning path for IT professionals to migrate their virtualized workloads to OpenShift Virtualization.
This course provides the following information and skills:
- An introduction to key OpenShift and Kubernetes concepts, such as nodes, pods, and operators
- Skills to deploy the OpenShift Virtualization operator
- Skills to configure networking and storage for virtual machines
- Strategies to migrate virtual machines from another hypervisor to OpenShift Virtualization by using the migration toolkit for virtualization operator and Ansible Automation Platform
This course is based on OpenShift Container Platform 4.16, OpenShift Virtualization 4.16, and Ansible Automation Platform 2.4.
-
Introduction to configuring and managing Red Hat Single Sign-On for authenticating and authorizing applications
Red Hat Single Sign-On Administration (DO313) is designed for system administrators who want to install, configure, and manage Red Hat Single Sign-On servers for securing applications. Learn about the different ways to authenticate and authorize applications using single sign-on standards like OAuth and OpenID Connect (OIDC). You will also learn how to install and configure Red Hat Single Sign-On on the OpenShift Container Platform. This course is based on Red Hat Single Sign-On version 7.6.
-
A pragmatic introduction to the Site Reliability Engineering implementation of DevOps
Red Hat Transformational Learning: Introduction to Pragmatic Site Reliability Engineering (TL112) teaches the vocabulary, concepts, and cultural considerations required to prepare to adopt an implementation of DevOps referred to as Site Reliability Engineering (SRE). In this course, the history, definitions, and Red Hat-specific take on this practice are explored as students prepare to continue the learning path of joining or building an SRE team.
-
Installing OpenShift on cloud, virtual, or physical infrastructure.
Red Hat OpenShift Installation Lab (DO322) teaches essential skills for installing an OpenShift cluster in a range of environments, from proof of concept to production, and how to identify customizations that may be required because of the underlying cloud, virtual, or physical infrastructure.
This course is based on Red Hat OpenShift Container Platform 4.6.
-
GCP-MLOF: MLOps (Machine Learning Operations) Fundamentals introduces participants to MLOps tools and best practices for deploying, evaluating, monitoring, and operating production ML systems on Google Cloud. MLOps is a discipline focused on the deployment, testing, monitoring, and automation of ML systems in production.
Machine Learning Engineering professionals use tools for continuous improvement and evaluation of deployed models. They work with (or can be) Data Scientists, who develop models, to enable velocity and rigor in deploying the best-performing models.