This course is designed to prepare DB2 for Linux, UNIX, and Windows database administrators to plan, install, manage, and monitor a DB2 pureScale database system. This course covers the features and functions of the DB2 pureScale feature for DB2 10.5, including fix pack levels 4 and 5. This is a lecture-only course.
-
This classroom ES15G: z/OS Facilities course introduces the base elements, optional features, and servers that are provided in z/OS. It focuses on the system service facilities that are provided by the z/OS Base Control Program (BCP). It teaches students the functions of the major software base elements in the management of jobs, tasks, storage, data, and problems. It also addresses how the system programmer can affect these functions.
Students are introduced to the services provided by the servers that run in the z/OS environment, such as the Communications Server and the Security Server. Installation packaging options and the steps to install the z/OS environment are also introduced.
-
This course introduces and explains the System Automation for z/OS (SA z/OS) commands that are used for system operations. In this course, the System Automation for z/OS automation manager and automation agent run in a z/OS 2.2 environment. The automation platform, Tivoli NetView for z/OS, is at version 6 release 2. The course uses several automation scenarios in single-system and multisystem configurations to demonstrate the concepts that are taught in the lessons. This training class is delivered in an environment with multiple opportunities for hands-on lab exercises.
-
This course is designed to teach how to manage VSAM and non-VSAM data sets by coding and using the functions and features of the Access Method Services program, IDCAMS.
To reinforce the lecture material, machine exercises are provided that enable students to code and test selected IDCAMS commands such as DEFINE, REPRO, ALTER, and LISTCAT.
Hands-On Labs
Eight labs are included to address:
- IDCAMS commands, including ALTER, DEFINE CLUSTER, EXPORT, IMPORT, EXAMINE, LISTCAT, REPRO, and PRINT
- Tuning VSAM and the VSAM buffers
- Alternate indexes
-
This course is an intermediate course designed to teach Collaboration and Deployment Services users object and asset management, security, shared resource usage, automation, and interaction with IBM SPSS Modeler Gold. Students focus on the makeup of the content repository and its objects. They will learn how to manage repository objects and the logical hierarchy structure, and how to import, export, and promote objects for use in multi-repository environments. Students will become familiar with the components of jobs and the mechanisms to set up, order, and relate job steps. Scheduling, parameters, job monitoring, job history, and event notification are discussed. Finally, the role of Collaboration and Deployment Services in Modeler Gold is discussed, addressing Real Time Scoring, Analytic Data View, and Model Management.
-
This course is designed to introduce advanced parallel job development techniques in DataStage v11.5. In this course you will develop a deeper understanding of the DataStage architecture, including the development and runtime environments. This will enable you to design parallel jobs that are robust, less subject to errors, reusable, and optimized for better performance.
-
This WM154G: IBM MQ v9 System Administration (using Linux for labs) course provides technical professionals with the skills that are needed to administer IBM MQ queue managers on distributed operating systems and in the Cloud. In addition to the instructor-led lectures, you participate in hands-on lab exercises that are designed to reinforce lecture content. The lab exercises use IBM MQ V9.0, giving you practical experience with tasks such as handling queue recovery, implementing security, and problem determination.
Note: This course does not cover any of the features of MQ for z/OS or MQ for IBM i.
-
This course provides training on customizing and extending the IBM Content Navigator features. You learn how to develop plug-ins and implement External Data Services. You also learn how to create a custom workflow step processor. You use the student guide and an IBM Content Navigator system to complete the learning.
If you are enrolling in a Self-Paced Virtual Classroom or Web-Based Training course, before you enroll, please review the Self-Paced Virtual Classes and Web-Based Training Classes section on our Terms and Conditions page, as well as the system requirements, to ensure that your system meets the minimum requirements for this course.
-
This course provides participants with a high-level overview of the IBM Cognos Analytics suite of products and its underlying architecture. They will examine each component as it relates to an analytics solution. Participants will be shown a range of resources that provide additional information on each product.
-
This CE121G: IBM DB2 SQL Workshop course provides an introduction to the SQL language.
This course is appropriate for customers working in all DB2 environments, that is, z/OS, VM/VSE, iSeries, Linux, UNIX, and Windows. It is also appropriate for customers working in an Informix environment.
-
This KM700G: IBM BigIntegrate for Data Engineers v11.5.0.2 course teaches data engineers how to run DataStage jobs in a Hadoop environment. You will run jobs in traditional and YARN modes, and access HDFS files and Hive tables using different file formats and connector stages.
-
This KM413G: IBM InfoSphere Advanced QualityStage v11.5 course steps you through the QualityStage data cleansing process. You will transform an unstructured data source into a format suitable for loading into an existing data target. You will cleanse the source data by building a custom rule set and using it to standardize the data. You will then build a reference match to relate the cleansed source data to the existing target data.