Friday, September 17, 2021

ODI - Knowledge module

What is a KM:

Knowledge Modules (KMs) are code templates. Each KM is dedicated to an individual task in the overall data integration process. The code in the KMs appears in nearly the form that it will be executed except that it includes Oracle Data Integrator (ODI) substitution methods enabling it to be used generically by many different integration jobs. The code that is generated and executed is derived from the declarative rules and metadata defined in the ODI Designer module.

Each knowledge module contains the knowledge required by ODI to perform a specific set of actions or tasks against a specific technology or set of technologies such as connecting to this technology, extracting data from it, transforming the data, checking it, integrating it etc.
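For illustration, here is a minimal sketch of what a single KM task line can look like. odiRef.getTable is one of the documented ODI substitution methods; the step options and flexfields of a real KM are omitted:

  -- Delete all rows from the target table before loading.
  -- The table name is resolved only at code-generation time.
  delete from <%=odiRef.getTable("L", "TARG_NAME", "A")%>

At generation time, ODI replaces the substitution tag with the fully qualified target table of the specific mapping, which is how a single KM serves many different jobs.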


  • A KM will be reused across several interfaces or models. To modify the behavior of hundreds of jobs using hand-coded scripts and procedures, developers would need to modify each script or procedure. In contrast, the benefit of Knowledge Modules is that you make a change once and it is instantly propagated to hundreds of transformations. KMs are based on logical tasks that will be performed. They don't contain references to physical objects (datastores, columns, physical paths, etc.)
  • KMs can be analyzed for impact analysis.
  • KMs can't be executed standalone. They require metadata from interfaces, datastores and models.

KMs fall into 6 different categories, as summarized below:

Reverse-engineering KM
  • Retrieves metadata to the Oracle Data Integrator work repository
  • Used in models to perform a customized reverse-engineering
Check KM
  • Checks consistency of data against constraints
  • Used in models, sub models and datastores for data integrity audit
  • Used in interfaces for flow control or static control

Loading KM
  • Loads heterogeneous data to a staging area
  • Used in interfaces with heterogeneous sources
Integration KM
  • Integrates data from the staging area to a target
  • Used in interfaces
Journalizing KM
  • Creates the Change Data Capture framework objects in the source staging area
  • Used in models, sub models and datastores to create, start and stop journals and to register subscribers.
Service KM
  • Generates data manipulation web services
  • Used in models and datastores

Details of each type of KM:

Reverse Engineering KM:

The RKM role is to perform customized reverse engineering for a model. The RKM is in charge of connecting to the application or metadata provider then transforming and writing the resulting metadata into Oracle Data Integrator's repository. The metadata is written temporarily into the SNP_REV_xx tables. The RKM then calls the Oracle Data Integrator API to read from these tables and write to Oracle Data Integrator's metadata tables of the work repository in incremental update mode. 


A typical RKM follows these steps:

  1. Cleans up the SNP_REV_xx tables from previous executions using the OdiReverseResetTable tool.
  2. Retrieves sub models, datastores, columns, unique keys, foreign keys and conditions from the metadata provider into the SNP_REV_SUB_MODEL, SNP_REV_TABLE, SNP_REV_COL, SNP_REV_KEY, SNP_REV_KEY_COL, SNP_REV_JOIN, SNP_REV_JOIN_COL and SNP_REV_COND tables.
  3. Updates the model in the work repository by calling the OdiReverseSetMetaData tool.
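As an illustration of step 2, here is a hedged sketch for an Oracle source. SNP_REV_TABLE and odiRef.getModel("ID") are documented ODI artifacts, but the column list below is abbreviated and assumed rather than copied from a shipped RKM:

  -- Stage every table of the model's schema into the repository's
  -- temporary reverse-engineering table (schema hard-coded for brevity).
  insert into SNP_REV_TABLE (I_MOD, TABLE_NAME, RES_NAME, TABLE_TYPE)
  select <%=odiRef.getModel("ID")%>, t.table_name, t.table_name, 'T'
  from all_tables t
  where t.owner = 'MY_SCHEMA'

OdiReverseSetMetaData then merges these staged rows into the work repository in incremental update mode.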

Check Knowledge Modules (CKM):


The CKM is in charge of checking that records of a data set are consistent with defined constraints. The CKM is used to maintain data integrity and participates in the overall data quality initiative.

 The CKM can be used in 2 ways:

  • To check the consistency of existing data. This can be done on any datastore or within interfaces, by setting the STATIC_CONTROL option to "Yes". In the first case, the data checked is the data currently in the datastore. In the second case, data in the target datastore is checked after it is loaded.
  • To check consistency of the incoming data before loading the records to a target datastore. This is done by using the FLOW_CONTROL option. In this case, the CKM simulates the constraints of the target datastore on the resulting flow prior to writing to the target.

In summary: the CKM can check either an existing table or the temporary "I$" table created by an IKM.

The CKM accepts a set of constraints and the name of the table to check. It creates an "E$" error table to which it writes all the rejected records. The CKM can also remove the erroneous records from the checked result set.

In STATIC_CONTROL mode, the CKM reads the constraints of the table and checks them against the data of the table. Records that don't match the constraints are written to the "E$" error table in the staging area.

In FLOW_CONTROL mode, the CKM reads the constraints of the target table of the Interface. It checks these constraints against the data contained in the "I$" flow table of the staging area. Records that violate these constraints are written to the "E$" table of the staging area.

In both cases, a CKM usually performs the following tasks:

  1. Creates the "E$" error table in the staging area. The error table contains the same columns as the datastore, as well as additional columns to trace error messages, check origin, check date, etc.
  2. Isolates the erroneous records in the "E$" table for each primary key, alternate key, foreign key, condition and mandatory column that needs to be checked.
  3. If required, removes the erroneous records from the table that has been checked.
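To make the FLOW_CONTROL case concrete, here is a minimal sketch of a primary key check written against hypothetical I$_CUSTOMER and E$_CUSTOMER tables (a generated CKM step derives the real names and constraint definitions through substitution methods):

  -- Copy rows whose key occurs more than once in the flow table
  -- into the error table, tagged with the violated constraint.
  insert into E$_CUSTOMER (ERR_TYPE, ERR_MESS, CUST_ID, CUST_NAME)
  select 'F', 'Duplicate primary key (PK_CUSTOMER)', i.CUST_ID, i.CUST_NAME
  from I$_CUSTOMER i
  where i.CUST_ID in (
    select CUST_ID
    from I$_CUSTOMER
    group by CUST_ID
    having count(*) > 1
  );

A similar statement is generated for each alternate key, foreign key, condition and mandatory column, and an optional DELETE then removes the rejected rows from I$_CUSTOMER.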

Loading Knowledge Modules (LKM):

An LKM is in charge of loading source data from a remote server to the staging area. It is used by interfaces when some of the source datastores are not on the same data server as the staging area. The LKM implements the declarative rules that need to be executed on the source server and retrieves a single result set that it stores in a "C$" table in the staging area. An LKM typically performs these steps:

  1. Creates the "C$" temporary table in the staging area. This table holds the records loaded from the source server.
  2. Obtains a set of pre-transformed records from the source server by executing the appropriate transformations on the source. Usually, this is done by a single SQL SELECT query when the source server is an RDBMS. When the source doesn't have SQL capabilities (such as flat files or applications), the LKM simply reads the source data with the appropriate method (read file or execute API).
  3. Loads the records into the "C$" table of the staging area.

An interface may require several LKMs when it uses datastores from different sources. When all source datastores are on the same data server as the staging area, no LKM is required.
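Conceptually, for an RDBMS source the generated steps resemble the sketch below (table and column names are illustrative; in reality the rows travel from the source to the staging area through the ODI agent, not through a database link):

  -- Step 1, on the staging area: create the C$ work table.
  create table C$_0EMP (EMPNO number, ENAME varchar2(30), SAL number);

  -- Step 2, on the source server: run the source-side
  -- transformations and filters as a single SELECT.
  select e.EMPNO, e.ENAME, e.SAL
  from EMP e
  where e.SAL > 1000;

  -- Step 3: the result set is inserted into C$_0EMP on the staging area.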

Integration Knowledge Modules (IKM):

The IKM is in charge of writing the final, transformed data to the target table. Every interface uses a single IKM. When the IKM is started, it assumes that all loading phases for the remote servers have already carried out their tasks. This means that all remote source data sets have been loaded by LKMs into "C$" temporary tables in the staging area, or that the source datastores are on the same data server as the staging area.

The IKM therefore simply needs to execute the "Staging and Target" transformations, joins and filters on the "C$" tables and on tables located on the same data server as the staging area. The resulting set is usually processed by the IKM and written into the "I$" temporary table before being loaded to the target. These final transformed records can be written in several ways depending on the IKM selected in your interface: they may be simply appended to the target, or compared for incremental updates or for slowly changing dimensions.

There are 2 types of IKMs: those that assume that the staging area is on the same server as the target datastore, and those that can be used when it is not.


When the staging area is on the target server, the IKM usually follows these steps:

  1. The IKM executes a single set-oriented SELECT statement to carry out the staging area and target declarative rules on all "C$" tables and local tables. This generates a result set.
  2. Simple "append" IKMs directly write this result set into the target table. More complex IKMs create an "I$" table to store this result set.
  3. If the data flow needs to be checked against target constraints, the IKM calls a CKM to isolate erroneous records and cleanse the "I$" table.
  4. The IKM writes records from the "I$" table to the target following the defined strategy (incremental update, slowly changing dimension, etc.).
  5. The IKM drops the "I$" temporary table.
  6. Optionally, the IKM can call the CKM again to check the consistency of the target datastore.

These types of KMs do not manipulate data outside of the target server. Data processing is set-oriented for maximum efficiency when performing jobs on large volumes.
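A hedged sketch of the simple "append" case on hypothetical tables (a real IKM derives every name and expression through substitution methods):

  -- Single set-oriented statement: staging-area joins, filters and
  -- aggregations on the C$ table, written straight to the target.
  insert into TRG_SALES_SUMMARY (CUST_ID, TOTAL_AMT)
  select c.CUST_ID, sum(c.AMOUNT)
  from C$_0ORDERS c
  group by c.CUST_ID;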


When the staging area is different from the target server, the IKM usually follows these steps:

  1. The IKM executes a single set-oriented SELECT statement to carry out declarative rules on all "C$" tables and tables located in the staging area. This generates a result set.
  2. The IKM loads this result set into the target datastore, following the defined strategy (append or incremental update).

This architecture has certain limitations, such as:

  • A CKM cannot be used to perform a data integrity audit on the data being processed.
  • Data needs to be extracted from the staging area before being loaded to the target, which may lead to performance issues.

Journalizing Knowledge Modules (JKM):

JKMs create the infrastructure for Change Data Capture on a model, a sub model or a datastore. JKMs are not used in interfaces, but rather within a model to define how the CDC infrastructure is initialized. This infrastructure is composed of a subscribers table, a table of changes, views on this table and one or more triggers or log capture programs.
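For a simple (trigger-based) JKM, the infrastructure generated for a hypothetical CUSTOMER datastore might look roughly like the sketch below. The J$/T$ names follow ODI's naming conventions and SUNOPSIS is ODI's default subscriber, but the exact column set of a real journal table is assumed here:

  -- Journal table: one row per captured change, per subscriber.
  create table J$CUSTOMER (
    JRN_SUBSCRIBER varchar2(400),
    JRN_FLAG       char(1),      -- 'I' for insert/update, 'D' for delete
    JRN_DATE       date,
    CUST_ID        number        -- primary key of the journalized table
  );

  -- Capture trigger on the source table.
  create or replace trigger T$CUSTOMER
  after insert or update or delete on CUSTOMER
  for each row
  declare
    v_flag char(1);
    v_id   number;
  begin
    if deleting then
      v_flag := 'D';
      v_id   := :old.CUST_ID;
    else
      v_flag := 'I';
      v_id   := :new.CUST_ID;
    end if;
    insert into J$CUSTOMER (JRN_SUBSCRIBER, JRN_FLAG, JRN_DATE, CUST_ID)
    values ('SUNOPSIS', v_flag, sysdate, v_id);
  end;
  /

The JKM also creates views over this journal table (the JV$/JV$D views) that interfaces use to consume the captured changes.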

Service Knowledge Modules (SKM):

SKMs are in charge of creating and deploying data manipulation Web Services to your Service Oriented Architecture (SOA) infrastructure. SKMs are set on a Model. They define the different operations to generate for each datastore's web service. Unlike other KMs, SKMs do not generate executable code, but rather the Web Services deployment archive files. SKMs are designed to generate Java code using Oracle Data Integrator's framework for Web Services. The code is then compiled and deployed in the application server's containers.


Tuesday, September 14, 2021

ODI - File Reading steps

Use case: here, we will read a CSV file from a file location and then write it to an Oracle DB using a copied source-file datastore.

Summary steps:

  1. Know the name/type of the file
  2. Prepare the path so that ODI can reach that area/machine/shared folder
  3. Create the physical and logical schemas
  4. Create the model; after that we can see our file as a datastore
  5. Create a target table (copy-paste from the datastore, or create it manually)
  6. Create a map and drag and drop these objects into the map
  7. Choose knowledge modules; if necessary, import new knowledge modules (path: ORACLE_HOME/odi/sdk/xml_reference)
  8. Run

Implementation Steps:

Step1: Create the source CSV file.

File name: sourcedata.csv
Headers: Name,empid,salary
Provide some values.

Step2: Upload the file to the correct location.

Here we use the local location /home/oracle/Documents

Step3: Define source topology physical architecture

Go to the Topology tab
Technologies
File -- new data server -- File_Generic (name)
Keep the same JDBC driver details.
In the definition section, provide the connection user and password, then save -- OK with Local (No Agent)

On the created data server (File_Generic) -- new physical schema
In the definition area, provide the directory name for both the schema and the work schema.

Go to the context area and save it. It will show as empty.

Step4: Define source topology logical architecture

File -- new logical schema -- name(LGC_test)
Provide context values for the following:
Development
Global
Prod
Test

Link the created physical schema

Step5: Define source design model

Go to the Design tab, create a new model folder named File, and then create a new model named MDL_File.
Provide the following definition details:
Name: MDL_FILE
Technology: File
Logical schema: LGC_test

Do reverse engineering and save.

File -- MDL_FILE -- create new data store
Name : sourcedata
Resourcename: sourcedata.csv
Under Files:
Format: Delimited
Heading (number of lines): 2
Field separator: Other: ,

Go to the attributes and do reverse engineering. It will populate all the CSV fields. Save and view the data.

Step6: Prepare the target table.

Copy the created sourcedata datastore to another model, for example the HR model (it has an Oracle DB connection).

Open the copied datastore and change the details: set the name and resource name to fileone.

Save

Step7: Mapping from source to target

Create a new project named file_projects
Create a new mapping named map_file_import
Drag and drop both datastores into the map area.
Link these 2 tables and map the fields. You can automap, or you can map them manually.

Save.

From the mapping area, go to the physical tab and change the loading knowledge module to LKM File to Oracle (Built-In) Global

In the integration knowledge module, set the create target table and truncate target table options to true

Note: Optionally, you can import knowledge modules from the project's knowledge module section.
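When the mapping runs, the generated session does roughly what the KM sketches earlier on this page showed: the LKM loads the file rows into a C$ table in the staging area, and the IKM creates/truncates the target table and appends the rows. A simplified, hedged view of the final IKM step (names are illustrative):

  insert into FILEONE (NAME, EMPID, SALARY)
  select c.NAME, c.EMPID, c.SALARY
  from C$_0SOURCEDATA c;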


Oracle Data Integrator all in one page links

12c ODI links below:

  1. ODI 12c Introduction
  2. Overview of Oracle Data Integrator Topology components
  3. ODI - how to add data servers and schemas in Topology
  4. Creating and using data models and datastores
  5. Creating the master work repository schema
  6. ODI type1 ODI 12c installation steps
  7. ODI type2 quickstart installation with virtual box
  8. ODI file reading steps
  9. odi-knowledge-module
  10. ODI mapping hardcode data and use case when with expression
  11. ODI - MobaXterm and Connect to private subnet from jumphost
  12. ODI - Create scenario under package
  13. ODI - Types of variables and its uses
  14. odi-define-variable-and-use-in-package
  15. ODI - Create a date variable using sysdate and use it in a package
  16. ODI - Lookup component
  17. odi-join-and-filter-components
  18. ODI - about load plans
  19. ODI - ELT vs ETL
  20. ODI - agents
  21. ODI - distinct component
  22. ODI - Use of sequence object in surrogate key
  23. ODI - split component
  24. ODI - aggregate component
  25. ODI - Sets union operation
  26. ODI - Join component
  27. ODI - Sort component
  28. ODI static vs flow control
  29. ODI - staging area and different temporary tables
  30. ODI - subquery filter component
  31. ODI - expression component
  32. ODI - Reusable mappings
  33. ODI - Datafile check used and allowed max size
  34. ODI - steps to increase overall adw size.
  35. odi-12c-stop-and-start-odi-instance from oci console
  36. odi-12c-scheduled-jobs-and-load-plans ran twice issue
  37. odi-changing-odi-schedule
  38. odi-steps-to-restart-odi-agent
  39. odi-sql-query-to-monitor-temp-size
  40. odi-11g-and-12c-architecture
  41. odi-12c-standalone-and-j2ee-agent
  42. odi-12c-ckm-with-flow-control-and-static-control and recycle errors
  43. odi-12c-cdc-implementation-using-jkm
  44. odi-12c-manually-run-load-plan
  45. odi-12c-procedure-and-options
  46. odi-12c-database-free-space-check-sql
  47. odi-activate-or-deactivate-your schedule on load plan
  48. odi-12c-loading-flat-file-to-sql-table
  49. odi-12c-loading-xml-file-to-sql-table
  50. odi-12c-use-unique-temporary-object-name
  51. odi-import-ootb-knowledge-modules
  52. odi-12c-model-creations-with-constraints
  53. odi-12c-scheduling-odi-load-plans
  54. odi-create-model-and-topology-objects using wizard
  55. odi-move-file-using-odifilemove-command
  56. odi-security-component
  57. odi-oracle-12c-database-installation
  58. ODI - Import and export
  59. odi-topology-steps-and-model-creations
  60. ODI- how to track load plan step package scenario behind code from session id
  61. odi-db-query-to-check-sessions-status
  62. ODI read fixed length file and insert or append to database
  63. ODI read a CSV file and insert to database
  64. odi ikm oracle incremental update vs IKM SQL control append
  65. ODI - Why ODI has two J$ views with CDC and Journalizing
  66. ODI 12C - Write file names from OS location to database
  67. ODI 12C - Importing flat file to ODI studio
  68. ODI 12c - Read Multiple files and store them in database table
  69. ODI12c - Why is the set variable increment parameter disabled in package?
  70. ODI12C - How to open ODI studio from console or terminal
  71. Odi 12c - How to clean cache to start odi studio from console
  72. ODI 12c - Buffer too small for CLOB to CHAR or BLOB to RAW conversion
  73. ODI - How to find a special chars from csv .gz file



Monday, September 13, 2021

ODI - Type2 - Quick start Installation with Virtual box

About ODI 12c VM:

The following are available with the installation:

  • ODI 12c on Oracle Linux Platform
  • Source Application on Oracle Database
  • Source File System
  • Target application on Oracle Database
  • ODI 12c studio
    • Topology created for Source and Target databases
    • Models reverse Engineered
    • Knowledge modules added to the project
    • Pre existing mappings

Steps to download VM and install ODI 12c appliances:

Step1: Go to Google and search for "ODI 12c Demo download". You will get the following link:

https://www.oracle.com/middleware/technologies/data-integrator/odi-demo-downloads.html

Download the Oracle VM VirtualBox

Download all 5 of the ODI 12c Getting-VM .7z files and save them in a folder


Also, Download the ODI 12c Getting VM Installation Guide for more help.


Step2: Install Oracle Virtual box

Step3: Now unzip the first downloaded file to get the .ova file.

Step4: Import the .ova file into the VirtualBox manager

Step5: Once the import is done, start the Linux box and open ODI 12c Studio.

Step6: Connect to the repository, provide the wallet password (welcome1) and provide the connection details.

Network Configuration:

In the default setup, the virtual machine can't connect to the internet directly. We have to manually configure the VM-to-host networking to reach the internet.

Steps:

VirtualBox File >> Preferences >> Network >> Add network >> OK

Settings >> Network >> select Adapter1 >> Enable network adapter (NAT Network) >> select Adapter2 >> Enable network adapter (Bridged Adapter) >> OK

Then start ODI 12c from the VM and check the internet from any browser.


Shared Folder between Virtual box and Host Machine:

Create a folder on the host machine, for example VM Share in the C drive

Go to the ODI 12c app >> Devices >> Shared Folders >> Shared Folders Settings >> Shared Folders >> add (+) button >> choose the created host machine folder VM Share >> choose the Auto-mount and Make Permanent options (the folder name will show as VM_Share) >> OK

Open a terminal (/home/oracle) and run:

mkdir Share
cd Share
chmod 777 /home/oracle/Share

su root

Password: oracle

mount -t vboxsf VM_Share /home/oracle/Share


Notes:

1. You can also download VirtualBox from the site below:

https://virtualbox.org/wiki/Downloads

2. You can also download 7zip from the link below:

https://www.7-zip.org/download.html


Saturday, September 11, 2021

ODI - Type1- Installation of ODI 12c

Installation needs the following 2 components:

  • Search for ODI on Google and download the software Disk1. The latest version is currently Oracle Data Integrator 12c (12.2.1.4.0).
  • JDK version 1.8

Simple steps:

Step1: Open CMD in admin mode.

Step2: Go to the Java bin path like "C:\Program Files\Java\jdk1.8.0_51\bin"

Step3: java -jar <ODI file path>

The ODI file path looks like C:\Users\Srinanda\Desktop\fmw_12.2.1.4.0_odi_Disk1_1of1\fmw*.jar

Step4: The installer will start. Choose Standard or Enterprise

Next

Install

Step5: Once installation is done, connect to the repository.

ODI - Creating the Master and Work Repository Schemas


Before you can configure an Oracle Data Integrator domain, you must create the Master and Work Repository (ODI_REPO) schemas on a certified database for use with this release of Oracle Fusion Middleware.

Configuring a Certified Database

Before you begin, make sure you have installed and configured a certified database.


Starting the Repository Creation Utility

Navigate to the ORACLE_HOME/oracle_common/bin directory:

Start RCU.

On UNIX operating systems:

./rcu

On Microsoft Windows operating systems:

rcu.bat

Navigating the Repository Creation Utility Screens to Create the Schema

Start RCU - Click Next.

Select a Repository Creation Method.

If you have permission to perform DBA activities on your database, select System Load and Product Load Concurrently.

Provide the database connection details for RCU to connect to your database. Click Next to proceed, then click OK on the dialog window confirming that connection to the database was successful.

Select Create a new prefix, specify a custom prefix, then select the Oracle Data Integrator schema.

If you are running RCU from a standalone installation, select Oracle Data Integrator and click Next to proceed, then click OK on the dialog window confirming that prerequisite checking for schema creation was successful.

Specify how you want to set the schema passwords on your database, then specify and confirm your passwords.

Specify the custom variables for the Master and Work Repository.

Navigate through the remainder of the RCU screens to complete schema creation. When you reach the Completion Summary screen, click Close to dismiss RCU.

For more details: https://docs.oracle.com/middleware/1221/core/ODING/GUID-25AC5AEE-D46D-4E4B-8835-4C1FE32207CC.htm#ODING860

ODI - Creating and Using Data Models and Datastores

 Introduction to Models:

A Model is the description of a set of datastores. It corresponds to a group of tabular data structures stored in a data server. A model is based on a Logical Schema defined in the topology. In a given Context, this Logical Schema is mapped to a Physical Schema. The Data Schema of this Physical Schema contains the physical data structures (tables, files, JMS messages, elements from an XML file) that are represented as datastores.

Models as well as all their components are based on the relational paradigm (table, attributes, keys, etc.). Models in Data Integrator only contain Metadata, that is the description of the data structures. They do not contain a copy of the actual data.

Datastores:

A datastore represents a data structure. It can be a table, a flat file, a message queue or any other data structure accessible by Oracle Data Integrator.

A datastore describes data in a tabular structure. Datastores are composed of attributes.

As datastores are based on the relational paradigm, it is also possible to associate the following elements to a datastore:


Keys

A Key is a set of attributes with a specific role in the relational paradigm. Primary and Alternate Keys identify each record uniquely. Non-Unique Indexes enable optimized record access.

References

A Reference is a functional link between two datastores. It corresponds to a Foreign Key in a relational model. For example: The INVOICE datastore references the CUSTOMER datastore through the customer number.

Conditions and Filters

Conditions and Filters are WHERE-type SQL expressions attached to a datastore. They are used to validate or filter the data in this datastore. For example: ORDER_AMOUNT > 0.

Data Integrity

A model contains constraints such as Keys, References or Conditions, but also non-null flags on attributes. Oracle Data Integrator includes a data integrity framework for ensuring the quality of a data model.

This framework allows you to perform:

Static Checks to verify the integrity of the data contained in a data model. This operation is performed to assess the quality of the data in a model when constraints do not physically exist in the data server but are defined in Data Integrator only.

Flow Check to verify the integrity of a data flow before it is integrated into a given datastore. The data flow is checked against the constraints defined in Oracle Data Integrator for the datastore that is the target of the data flow.


Reverse-engineering

A new model is created with no datastores. Reverse-engineering is the process that populates the model in Oracle Data Integrator by retrieving metadata from the data server containing the data structures. There are two different types of reverse-engineering:

Standard reverse-engineering uses standard JDBC driver features to retrieve the metadata. Note that unique keys are not reverse-engineered when using a standard reverse-engineering.

Customized reverse-engineering uses a technology-specific Reverse Knowledge Module (RKM) to retrieve the metadata, using a method specific to the given technology. This method is recommended if a technology specific RKM exists because it usually retrieves more information than the Standard reverse-engineering method.


Creating and Reverse-Engineering a Model

Now that the key components of an ODI model have been described, an overview is provided on how to create and reverse-engineer a model:

Creating a Model

Creating a Model and Topology Objects

Reverse-engineering a Model


Creating a Model

A Model is a set of datastores corresponding to data structures contained in a Physical Schema.

To create a Model:

In Designer Navigator expand the Models panel.

Right-click then select New Model.

Fill in the following fields in the Definition tab:

Name: Name of the model used in the user interface.

Technology: Select the model's technology.

Logical Schema: Select the Logical Schema on which your model will be based.

On the Reverse Engineer tab, select a Context which will be used for the model's reverse-engineering.

Note that if there is only one context that maps the logical schema, this context will be set automatically.

Select Save from the File main menu.

The model is created, but contains no datastore yet.


Creating a Model and Topology Objects

You can create a Model along with other topology objects, including new data servers, contexts and schemas, at the same time:

In ODI Studio, select File and click New....

In the New Gallery dialog, select Create a New Model and Topology Objects and click OK.

The Create New Model and Topology Objects wizard appears.

In the Model panel of the wizard, fill in the following fields.

Name: Name of the model used in the user interface.

Technology: Select the model's technology.

Logical Schema: Select the Logical Schema on which your model will be based.

Context: Select the context that will be used for the model's reverse-engineering.

Click Next.

In the Data Server panel of the wizard, fill in the following data server fields.

Name: Name of the data server, as it appears in the user interface.

Note: For naming data servers, it is recommended to use the following naming standard: <TECHNOLOGY_NAME>_<SERVER_NAME>.

Technology: Technology linked to the data server.

Note: Appears only if the Technology selected for the Model is of the type Generic SQL.

User: User name used for connecting to the data server.

Password: Password linked with the user name.

Note: This password is stored encrypted in the repository.

Driver List: Provides a list of drivers available to be used with the data server.

Driver: Name of the driver used for connecting to the data server.

URL: Provides the connection details.

Properties: Lists the properties that you can set for the selected driver.

Click Next.

In the Physical Schema panel of the wizard, fill in the physical schema fields.

Name: Name of the physical schema. It is calculated automatically and is read-only.

Datasource (Catalog): Name of the catalog in the data server.

Note: Appears only if the Technology selected supports Catalogs.

Schema (Schema): Name of the schema in the data server. Schema, owner, or library where the required data is stored.

Note: Oracle Data Integrator lists all the schemas present in the data server. Sometimes, Oracle Data Integrator cannot draw up this list. In this case, you should enter the schema name, respecting the case.

Datasource (Work Catalog): Indicate the catalog in which you want to create these objects. For some data validation or transformation operations, Oracle Data Integrator may require work objects to be created.

Note: Appears only if the Technology selected supports Catalogs.

Schema (Work Schema): Indicates the schema in which you want to create these objects. For some data validation or transformation operations, Oracle Data Integrator may require work objects to be created. If you do not set a Work Schema, it defaults to the Schema during execution.

It is recommended that you create a specific schema dedicated to any work tables. By creating a schema named SAS or ODI in all your data servers, you ensure that all Oracle Data Integrator activity remains totally independent from your applications.

Note: Oracle Data Integrator lists all the schemas present in the data server. Sometimes, Oracle Data Integrator cannot draw up this list. In this case, you should enter the schema name, respecting the case.

Driver: Name of the driver used for connecting to the data server.

URL: Provides the connection details.

Properties: Specifies the properties for the selected driver.

Click Next.

Click Finish. The model and topology objects are created, but the model contains no datastore yet.


Reverse-engineering a Model

To automatically populate datastores into the model you need to perform a reverse-engineering for this model.


Standard Reverse-Engineering

A Standard Reverse-Engineering uses the capacities of the JDBC driver used to connect the data server to retrieve the model metadata.


To perform a Standard Reverse-Engineering:

In the Reverse Engineer tab of your Model:

Select Standard.

Select the Context used for the reverse-engineering

Select the Types of objects to reverse-engineer. Only objects of these types will be taken into account by the reverse-engineering process.

Enter in the Mask field the mask of tables to reverse-engineer. The mask selects the objects to reverse. This mask uses the SQL LIKE syntax. The percent (%) symbol means zero or more characters, and the underscore (_) symbol means one character. For example, the mask PRD% matches only tables whose names start with PRD.

Optionally, you can specify the characters to remove for the table alias. These are the characters to delete in order to derive the alias. Note that if the datastores already exist, the characters specified here will not be removed from the table alias. Updating a datastore is not applied to the table alias.

In the Selective Reverse-Engineering tab select Selective Reverse-Engineering, New Datastores, Existing Datastores and Objects to Reverse Engineer.

A list of datastores to be reverse-engineered appears. Leave those you wish to reverse-engineer checked.

Select Save from the File main menu.

Click Reverse Engineer in the Model toolbar menu.

Oracle Data Integrator launches a reverse-engineering process for the selected datastores. A progress bar indicates the progress of the reverse-engineering process.

The reverse-engineered datastores appear under the model node in the Models panel.


Customized Reverse-Engineering:

A Customized Reverse-Engineering uses a Reverse-engineering Knowledge Module (RKM), to retrieve metadata for a specific type of technology and create the corresponding datastore definition in the data model.

For example, for the Oracle technology, the RKM Oracle accesses the database dictionary tables to retrieve the definition of tables, attributes, keys, etc., that are created in the model.


To perform a Customized Reverse-Engineering using a RKM:

In the Reverse Engineer tab of your Model:

Select Customized.

Select the Context used for the reverse-engineering

Select the Types of objects to reverse-engineer. Only objects of these types will be taken into account by the reverse-engineering process.

Enter in the Mask the mask of tables to reverse engineer.

Select the KM that you want to use for performing the reverse-engineering process. This KM is typically called RKM <technology>.<name of the project>.

Optionally, you can specify the characters to remove for the table alias. These are the characters to delete in order to derive the alias. Note that if the datastores already exist, the characters specified here will not be removed from the table alias. Updating a datastore is not applied to the table alias.

Click Reverse Engineer in the Model toolbar menu, then Yes to validate the changes.

Click OK.

The Session Started Window appears.

Click OK.

You can review the reverse-engineering tasks in the Operator Navigator. If the reverse-engineering process completes correctly, reverse-engineered datastores appear under the model node in the Models panel.
