What is a Knowledge Module (KM):
Knowledge Modules (KMs) are code templates. Each KM is dedicated to an individual task in the overall data integration process. The code in a KM appears in nearly the form in which it will be executed, except that it includes Oracle Data Integrator (ODI) substitution methods that enable it to be used generically by many different integration jobs (a template sketch follows the list below). The code that is generated and executed is derived from the declarative rules and metadata defined in the ODI Designer module.
Each Knowledge Module contains the knowledge required by ODI to perform a specific set of actions or tasks against a specific technology or set of technologies, such as connecting to the technology, extracting data from it, transforming the data, checking it, integrating it, and so on.
- A KM is reused across several interfaces or models. With hand-coded scripts and procedures, changing the behavior of hundreds of jobs would require modifying each script or procedure individually; with Knowledge Modules, you make the change once and it is instantly propagated to hundreds of transformations. KMs are based on the logical tasks to be performed and contain no references to physical objects (datastores, columns, physical paths, etc.).
- KMs can be included in impact analysis.
- KMs can't be executed standalone. They require metadata from interfaces, datastores and models.
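To make the template idea concrete, here is a minimal sketch of a single KM task, assuming SQL as the generated language. The template mixes SQL with ODI substitution methods (odiRef calls) that are resolved at code-generation time from the interface's metadata; the methods shown are commonly documented ones, but the exact calls, patterns and options in a real KM depend on your ODI version.

    -- Illustrative KM task "Insert flow into I$ table" (a sketch, not a shipped KM).
    -- At generation time the odiRef.* substitution methods are replaced with the
    -- concrete table name, column list, joins and filters declared in the interface.
    insert into <%=odiRef.getTable("L", "INT_NAME", "W")%>
    (
      <%=odiRef.getColList("", "[COL_NAME]", ",\n  ", "", "")%>
    )
    select
      <%=odiRef.getColList("", "[EXPRESSION]", ",\n  ", "", "")%>
    from <%=odiRef.getFrom()%>
    where (1=1)
    <%=odiRef.getJoin()%>
    <%=odiRef.getFilter()%>

The same template, applied to a different interface, generates completely different SQL, because the substitution methods are resolved against that interface's metadata.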
KMs fall into six categories, summarized below:
Reverse-engineering KM (RKM)
- Retrieves metadata to the Oracle Data Integrator work repository
- Used in models to perform a customized reverse-engineering
Check KM (CKM)
- Checks consistency of data against constraints
- Used in models, sub models and datastores for data integrity audit
- Used in interfaces for flow control or static control
Loading KM (LKM)
- Loads heterogeneous data to a staging area
- Used in interfaces with heterogeneous sources
Integration KM (IKM)
- Integrates data from the staging area to a target
- Used in interfaces
Journalizing KM (JKM)
- Creates the Change Data Capture framework objects in the source staging area
- Used in models, sub models and datastores to create, start and stop journals and to register subscribers
Service KM (SKM)
- Generates data manipulation web services
- Used in models and datastores
Reverse-engineering Knowledge Modules (RKM):
The RKM connects to the metadata provider and writes the retrieved metadata into the ODI work repository. A typical RKM follows these steps:
Cleans up the SNP_REV_xx tables from previous executions using the OdiReverseResetTable tool.
Retrieves sub models, datastores, columns, unique keys, foreign keys and conditions from the metadata provider into the SNP_REV_SUB_MODEL, SNP_REV_TABLE, SNP_REV_COL, SNP_REV_KEY, SNP_REV_KEY_COL, SNP_REV_JOIN, SNP_REV_JOIN_COL and SNP_REV_COND staging tables.
Updates the model in the work repository by calling the OdiReverseSetMetaData tool.
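For illustration, here is a minimal sketch of the skeleton of such an RKM. The dictionary view, schema filter and SNP_REV_TABLE columns shown are assumptions for the example; a real RKM uses the SNP_REV_xx columns documented for your ODI release and obtains names through substitution methods.

    -- Step 1: reset the SNP_REV_xx staging tables (done with the OdiReverseResetTable
    -- tool in the KM; shown here only as a placeholder for where it fits).

    -- Step 2 (sketch): load datastore metadata from an Oracle metadata provider into
    -- SNP_REV_TABLE. Column names on both sides are assumptions for this example.
    insert into SNP_REV_TABLE
      (TABLE_NAME, RES_NAME, TABLE_ALIAS, TABLE_TYPE)
    select
      t.table_name,                      -- datastore name as it will appear in the model
      t.table_name,                      -- physical resource name
      substr(t.table_name, 1, 3),        -- default alias
      'T'                                -- 'T' = table
    from all_tables t
    where t.owner = 'MY_SOURCE_SCHEMA';  -- in a real RKM the schema comes from a substitution method

    -- Step 3: apply the staged metadata to the model with the OdiReverseSetMetaData tool.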
Check Knowledge Modules (CKM):
The CKM is in charge of checking that records of a data set are consistent with defined constraints. The CKM is used to maintain data integrity and participates in the overall data quality initiative. It can be used in two ways:
- To check the consistency of existing data. This can be done on any datastore or within interfaces, by setting the STATIC_CONTROL option to "Yes". In the first case, the data checked is the data currently in the datastore. In the second case, data in the target datastore is checked after it is loaded.
- To check consistency of the incoming data before loading the records to a target datastore. This is done by using the FLOW_CONTROL option. In this case, the CKM simulates the constraints of the target datastore on the resulting flow prior to writing to the target.
In summary: the CKM can check either an existing table or the temporary "I$" table created by an IKM.
The CKM accepts a set of constraints and the name of the table to check. It creates an "E$" error table, to which it writes all the rejected records. The CKM can also remove the erroneous records from the checked result set.
In STATIC_CONTROL mode, the CKM reads the constraints of the table and checks them against the data of the table. Records that don't match the constraints are written to the "E$" error table in the staging area.
In FLOW_CONTROL mode, the CKM reads the constraints of the target table of the Interface. It checks these constraints against the data contained in the "I$" flow table of the staging area. Records that violate these constraints are written to the "E$" table of the staging area.
In both cases, a CKM usually performs the following tasks:
Create the "E$" error table on the staging area. The error table should contain the same columns as the datastore as well as additional columns to trace error messages, check origin, check date etc.
Isolate the erroneous records in the "E$" table for each primary key, alternate key, foreign key, condition and mandatory column that needs to be checked.
If required, remove erroneous records from the table that has been checked.
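As an illustration, here is a minimal sketch of the kind of SQL a FLOW_CONTROL check could generate for a single mandatory-column constraint. The I$_CUSTOMER and E$_CUSTOMER names and the audit columns are assumptions for the example; a real CKM derives them from the checked datastore and its constraints.

    -- Illustrative only: copy rows of the I$ flow table that violate a NOT NULL
    -- constraint into the E$ error table, with a few audit columns.
    insert into E$_CUSTOMER
      (CUST_ID, CUST_NAME, ERR_TYPE, ERR_MESS, CHECK_DATE)
    select
      CUST_ID, CUST_NAME,
      'F',                           -- 'F' = flow check ('S' would be a static check)
      'CUST_NAME cannot be null',
      sysdate
    from I$_CUSTOMER
    where CUST_NAME is null;

    -- Optionally cleanse the flow table so only valid rows reach the target
    delete from I$_CUSTOMER
    where CUST_NAME is null;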
Loading Knowledge Modules (LKM):
An LKM is in charge of loading source data from a remote server to the staging area. It is used by interfaces when some of the source datastores are not on the same data server as the staging area. The LKM implements the declarative rules that need to be executed on the source server and retrieves a single result set that it stores in a "C$" table in the staging area.
The LKM creates the "C$" temporary table in the staging area. This table will hold records loaded from the source server.
The LKM obtains a set of pre-transformed records from the source server by executing the appropriate transformations on the source. Usually, this is done by a single SQL SELECT query when the source server is an RDBMS. When the source doesn't have SQL capabilities (such as flat files or applications), the LKM simply reads the source data with the appropriate method (read file or execute API).
The LKM loads the records into the "C$" table of the staging area.
An interface may require several LKMs when it uses datastores from different sources. When all source datastores are on the same data server as the staging area, no LKM is required.
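For illustration, here is a minimal sketch of the "C$" load an SQL-to-SQL loading strategy might generate. The table, column and C$ names are assumptions; a real LKM builds these names and column lists with ODI substitution methods.

    -- Step 1 (staging area): create the C$ work table that will hold the source rows
    create table STAGING.C$_0CUSTOMER
    (
      CUST_ID   number,
      CUST_NAME varchar2(100),
      COUNTRY   varchar2(50)
    );

    -- Step 2, command executed on the source server: the pre-transformed result set,
    -- applying only the mappings and filters that can run on the source
    select
      c.customer_id          as CUST_ID,
      upper(c.customer_name) as CUST_NAME,   -- source-side transformation
      c.country_code         as COUNTRY
    from SOURCE.CUSTOMERS c
    where c.active_flag = 'Y';               -- source-side filter

    -- Step 2, command executed on the staging area: the agent binds each fetched row
    -- and inserts it into the C$ table
    insert into STAGING.C$_0CUSTOMER (CUST_ID, CUST_NAME, COUNTRY)
    values (:CUST_ID, :CUST_NAME, :COUNTRY);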
Integration Knowledge Modules (IKM):
The IKM is in charge of writing the final, transformed data to the target table. Every interface uses a single IKM. When the IKM is started, it assumes that all loading phases for the remote servers have already carried out their tasks: all remote source data sets have been loaded by LKMs into "C$" temporary tables in the staging area, or the source datastores are on the same data server as the staging area. The IKM therefore only needs to execute the "Staging and Target" transformations, joins and filters on the "C$" tables and on tables located on the same data server as the staging area. The resulting set is usually processed by the IKM and written into the "I$" temporary table before being loaded to the target.
These final transformed records can be written in several ways depending on the IKM selected in your interface: they may simply be appended to the target, or compared for incremental updates or for slowly changing dimensions. There are two types of IKMs: those that assume that the staging area is on the same server as the target datastore, and those that can be used when it is not.
When the staging area is on the target server, the IKM usually follows these steps:
The IKM executes a single set-oriented SELECT statement to carry out the staging area and target declarative rules on all "C$" tables and on tables local to the target server. This generates a result set.
Simple "append" IKMs directly write this result set into the target table. More complex IKMs create an "I$" table to store this result set.
If the data flow needs to be checked against target constraints, the IKM calls a CKM to isolate erroneous records and cleanse the "I$" table.
The IKM writes records from the "I$" table to the target following the defined strategy (incremental update, slowly changing dimension, etc.).
The IKM drops the "I$" temporary table.
Optionally, the IKM can call the CKM again to check the consistency of the target datastore.
These types of KMs do not manipulate data outside of the target server. Data processing is set-oriented for maximum efficiency when performing jobs on large volumes.
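As an illustration, here is a minimal sketch of what an incremental-update strategy might generate when the staging area is on the target server. All object names are assumptions for the example; a real IKM derives them with substitution methods and behaves according to the KM options you set.

    -- Illustrative only: build the I$ flow table from the C$ table and a local table,
    -- then merge it into the target.
    create table STAGING.I$_CUSTOMER as
    select
      c.CUST_ID,
      c.CUST_NAME,
      d.REGION_NAME
    from STAGING.C$_0CUSTOMER c
    join TARGET_SCHEMA.COUNTRIES d        -- table already on the target/staging server
      on d.COUNTRY_CODE = c.COUNTRY;      -- staging-area join

    -- Incremental update: update rows that already exist, insert the new ones
    merge into TARGET_SCHEMA.CUSTOMER t
    using STAGING.I$_CUSTOMER i
      on (t.CUST_ID = i.CUST_ID)
    when matched then
      update set t.CUST_NAME = i.CUST_NAME, t.REGION_NAME = i.REGION_NAME
    when not matched then
      insert (CUST_ID, CUST_NAME, REGION_NAME)
      values (i.CUST_ID, i.CUST_NAME, i.REGION_NAME);

    -- Drop the I$ temporary table
    drop table STAGING.I$_CUSTOMER;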
When the staging area is different from the target server, the IKM usually follows these steps:
The IKM executes a single set-oriented SELECT statement to carry out the declarative rules on all "C$" tables and on tables located on the staging area. This generates a result set.
The IKM loads this result set into the target datastore, following the defined strategy (append or incremental update).
This architecture has certain limitations, such as:
A CKM cannot be used to perform a data integrity audit on the data being processed.
Data needs to be extracted from the staging area before being loaded to the target, which may lead to performance issues.
Journalizing Knowledge Modules (JKM):
JKMs create the infrastructure for Change Data Capture on a model, a sub model or a datastore. JKMs are not used in interfaces, but rather within a model to define how the CDC infrastructure is initialized. This infrastructure is composed of a subscribers table, a table of changes, views on this table and one or more triggers or log capture programs.
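As an illustration, here is a minimal sketch of a simplified, trigger-based CDC infrastructure of the kind a JKM might create. The journal table, trigger and subscriber-table names and columns are assumptions for the example; the actual objects and their naming depend on the JKM you use.

    -- Journal table: one row per captured change and per subscriber
    create table STAGING.J$CUSTOMER
    (
      JRN_SUBSCRIBER varchar2(100),   -- subscriber that still has to consume the change
      JRN_FLAG       char(1),         -- 'I' = insert/update, 'D' = delete
      JRN_DATE       date,
      CUST_ID        number           -- primary key of the journalized datastore
    );

    -- Capture trigger on the source table
    create or replace trigger SOURCE.T$CUSTOMER
    after insert or update or delete on SOURCE.CUSTOMERS
    for each row
    declare
      v_flag char(1);
      v_id   number;
    begin
      if deleting then
        v_flag := 'D';
        v_id   := :old.customer_id;
      else
        v_flag := 'I';
        v_id   := :new.customer_id;
      end if;
      -- record one change row per registered subscriber
      insert into STAGING.J$CUSTOMER (JRN_SUBSCRIBER, JRN_FLAG, JRN_DATE, CUST_ID)
      select SUB_NAME, v_flag, sysdate, v_id
      from STAGING.CDC_SUBSCRIBERS;   -- assumed subscribers table
    end;
    /

A view over the journal table, filtered on the subscriber and joined back to the source table, would then expose the changed rows to interfaces that use journalized data.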
Service Knowledge Modules (SKM):
SKMs are in charge of creating and deploying data manipulation Web Services to your Service Oriented Architecture (SOA) infrastructure. SKMs are set on a Model. They define the different operations to generate for each datastore's web service. Unlike other KMs, SKMs do not generate executable code but rather the Web Services deployment archive files. SKMs are designed to generate Java code using Oracle Data Integrator's framework for Web Services. The code is then compiled and deployed on the application server's containers.