This intermediate course teaches Collaboration and Deployment Services users object and asset management, security, shared resource usage, automation, and interaction with IBM SPSS Modeler Gold. Students focus on the makeup of the content repository and its objects. They will learn how to manage repository objects and the logical hierarchy structure, and how to import, export, and promote objects for use in multi-repository environments. Students will become familiar with the components of jobs and the mechanisms to set up, order, and relate job steps. Scheduling, parameters, job monitoring, job history, and event notification are discussed. Finally, the course addresses the role of Collaboration and Deployment Services in Modeler Gold, covering Real Time Scoring, Analytic Data View, and Model Management.
-
This course is designed to introduce advanced parallel job development techniques in DataStage v11.5. In this course you will develop a deeper understanding of the DataStage architecture, including the DataStage development and runtime environments. This will enable you to design parallel jobs that are robust, less subject to errors, reusable, and optimized for better performance.
-
This course provides participants with a high-level overview of the IBM Cognos Analytics suite of products and their underlying architecture. They will examine each component as it relates to an analytics solution. Participants will be shown a range of resources that provide additional information on each product.
-
This CE121G: IBM DB2 SQL Workshop course provides an introduction to the SQL language.
This course is appropriate for customers working in all DB2 environments, that is, z/OS, VM/VSE, iSeries, Linux, UNIX, and Windows. It is also appropriate for customers working in an Informix environment.
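As a flavor of the material, a basic query of the kind covered early in the workshop might look like this (the table and column names are hypothetical, for illustration only):

```sql
-- Hypothetical EMPLOYEE table; names are illustrative only
SELECT lastname, salary
  FROM employee
 WHERE workdept = 'D11'
 ORDER BY salary DESC;
```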
-
This KM700G: IBM BigIntegrate for Data Engineers v11.5.0.2 course teaches data engineers how to run DataStage jobs in a Hadoop environment. You will run jobs in traditional and YARN mode, and access HDFS files and Hive tables using different file formats and connector stages.
-
This KM413G: IBM InfoSphere Advanced QualityStage v11.5 course steps you through the QualityStage data cleansing process. You will transform an unstructured data source into a format suitable for loading into an existing data target. You will cleanse the source data by building a custom rule set and using it to standardize the data. You will then build a reference match to relate the cleansed source data to the existing target data.
-
Administrators of DB2 11 for z/OS can acquire a view of the architecture and fundamental processes required to manage a DB2 11 for z/OS subsystem. Engage in lectures and hands-on labs to learn how to:
- Relate the z/OS IPL process to a DB2 subsystem
- Explain effects of stopping and starting DB2
- Explain how DB2 sets and uses Integrated Catalog Facility (ICF) catalog names
- Use the DSN command processor in batch and foreground
- Use views to minimize users’ ability to see into the DB2 catalog
- See how the catalog (through grant activity) controls access to data
- Search the catalog for problem situations
- Use the catalog and DB2 utilities to determine data recovery requirements
- Describe Internal Resource Lock Manager (IRLM) in a DB2 environment
- Implement DB2 and Resource Access Control Facility (RACF) security
- Describe DB2 program flow for all environments
- Display normal and problem threads and database status
- See how the SQL Processor Using File Input (SPUFI) AUTOCOMMIT option defers the COMMIT/ROLLBACK decision
- Interpret lock displays
- Identify and cancel particular threads
- Describe available DB2 utilities to manage system and user page sets
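Several of the tasks above, such as searching the catalog and restricting catalog visibility with views, come down to ordinary SQL against the DB2 catalog tables. A small sketch, assuming the standard SYSIBM catalog names (the database name MYDB and view name are hypothetical):

```sql
-- A typical catalog search: list user tables in a given database
SELECT name, creator, tsname
  FROM sysibm.systables
 WHERE dbname = 'MYDB'
   AND type = 'T';

-- A view that limits how far users can see into the catalog:
-- each user sees only rows for tables they created
CREATE VIEW mytables AS
  SELECT name, dbname, tsname
    FROM sysibm.systables
   WHERE creator = USER;
```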
-
The CE131G: IBM DB2 SQL Workshop for Experienced Users course teaches you how to make use of advanced SQL techniques to access DB2 databases in different environments. This course is appropriate for customers working in all DB2 environments, that is, z/OS, Linux, UNIX, and Windows.
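Advanced techniques of the kind this workshop covers include constructs such as common table expressions and OLAP functions. A small sketch (the EMPLOYEE table and its columns are hypothetical):

```sql
-- Common table expression plus an OLAP function: rank employees
-- earning above their department's average salary
WITH dept_avg (workdept, avg_salary) AS (
  SELECT workdept, AVG(salary)
    FROM employee
   GROUP BY workdept
)
SELECT e.lastname, e.workdept, e.salary,
       RANK() OVER (PARTITION BY e.workdept
                    ORDER BY e.salary DESC) AS salary_rank
  FROM employee e
  JOIN dept_avg d ON e.workdept = d.workdept
 WHERE e.salary > d.avg_salary;
```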
-
This course teaches database administrators how to plan, implement, and manage Db2 11.1 databases using the High Availability Disaster Recovery (HADR) feature. The lectures cover the processing performed for a Db2 primary and a Db2 standby database. The Db2 database configuration options that define and control the HADR function are covered, and the option to define and operate multiple HADR standby databases is explained. The course also covers the special considerations for allowing read-only access by applications to an HADR standby database. Students will learn the Db2 commands, such as TAKEOVER HADR, START HADR, and STOP HADR, that are used to control HADR primary and standby database activity. Monitoring the HADR status of the primary and standby databases using the db2pd command is also presented, as is the usage of HADR with Db2 pureScale databases.
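The control commands named above follow the Db2 command line processor syntax. A sketch of a typical sequence (the database name SAMPLE is illustrative; these commands require a configured HADR pair, so this is an outline rather than a runnable script):

```shell
# Start HADR on the standby first, then on the primary
db2 START HADR ON DATABASE sample AS STANDBY
db2 START HADR ON DATABASE sample AS PRIMARY

# Check HADR state and log positions on either server
db2pd -db sample -hadr

# Planned role switch, issued on the standby
db2 TAKEOVER HADR ON DATABASE sample

# Stop HADR on a database
db2 STOP HADR ON DATABASE sample
```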
-
Do you want to match member records, link member records, and perfect a search algorithm for your InfoSphere MDM Virtual and Physical implementations? Then this course is designed for you.
The IBM InfoSphere MDM Algorithms V11 course prepares you to work with and customize the algorithm configurations deployed to the InfoSphere MDM Probabilistic Matching Engine (PME) for Virtual and Physical MDM implementations. The PME is the heart of all matching, linking, and searching for entities (Person, Organization, etc.) that exist in InfoSphere MDM.
This course has a heavy emphasis on the exercises, where you will implement the customizations discussed in the course to perform matching, linking, and searching on fields not provided by the default implementation.
At the end of this course you should feel comfortable customizing an algorithm for the PME for Virtual and Physical MDM implementations.
-
This course provides authors with an introduction to building reports using Cognos Analytics. Techniques to enhance, customize, and manage reports will be explored. Activities will illustrate and reinforce key concepts during this learning opportunity.
-
This offering teaches Professional Report Authors about advanced report building techniques using relational data models, dimensional data, and ways of enhancing, customizing, managing, and distributing professional reports. The course builds on topics presented in the Fundamentals course.
Activities will illustrate and reinforce key concepts throughout the course.