Tuesday, September 21, 2021

ODI - MobaXterm and connect to private subnet using jumpbox

 What is MobaXterm:

  • MobaXterm is the ultimate and powerful toolbox for remote computing. In a single Windows application, it provides loads of functions that are tailored for programmers, webmasters, IT administrators and pretty much all users who need to handle their remote jobs in a simpler fashion.
  • MobaXterm brings all the important remote network tools (SSH, X11, RDP, VNC, FTP, MOSH, ...) and Unix commands (bash, ls, cat, sed, grep, awk, rsync, ...) to the Windows desktop, in a single portable exe file which works out of the box.

You can choose to create a new SSH, Telnet, Rlogin, RDP, VNC, XDMCP, FTP, SFTP, Mosh or serial session.

For more details:
https://mobaxterm.mobatek.net/

How to connect to a private subnet using a jumpbox:

Step1: Click on Session and select VNC

Step2: Provide the remote private-subnet host name or IP and port details, then click Network settings and select SSH gateway


Step3: Provide the jumpbox (gateway) host details:
host, port, username, and the SSH key (private key) to use


Step4: Go to Bookmark settings, provide a session name and click OK


Step5: Provide the VNC password



Note: to get access to the private subnet or jumpbox, you have to do the following on those hosts:

Log in to that host
sudo su - <userid> and ls -la
cd ~/.ssh
vi authorized_keys, add your public key there and save.

You can generate private/public key pairs with the PuTTYgen app.

Friday, September 17, 2021

ODI - mapping - hardcode data and case when in expression

Use case: Here we have two tables. The source table has 2 columns, train id and name, and the target table has 3 columns: train id, name and city. We will hardcode the city values, first with a constant expression and then with a CASE WHEN expression.

Implementation Steps:

Step1: Project -- Mappings -- New Mapping -- name it int_hardcode_data

Step2: Open the logical tab of mapping and drag and drop source and target tables. Map the train id and name.

Step3: Select city, open the Properties tab and enter 'London' in the expression section.

Step4: Go to the physical tab and select the IKM as IKM SQL Insert (default).

Select the default LKM: LKM SQL to SQL

Step5: Save and run with Simulation checked, then OK. Check the generated script.

Step6: Now run without simulation and click OK. Go to the Operator tab and check under Date -- Today.

Step7: Check your target database for records.
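For reference, the load that ODI generates for this mapping is conceptually similar to the SQL below. The table and column names are illustrative only; check the simulation script or the Operator log for the exact statement ODI produces.

-- Illustrative sketch only (hypothetical names TRAINS_SRC / TRAINS_TGT)
INSERT INTO TRAINS_TGT (TRAIN_ID, NAME, CITY)
SELECT SRC.TRAIN_ID,
       SRC.NAME,
       'London'          -- the hardcoded expression mapped to CITY
FROM   TRAINS_SRC SRC;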


For case when:

Step1: Go to the logical tab of mapping and edit the city expression

Write:

CASE
  WHEN Regions.region_id = 1 THEN 'London'
  WHEN Regions.region_id = 2 THEN 'New York'
  WHEN Regions.region_id = 3 THEN 'Tokyo'
  WHEN Regions.region_id = 4 THEN 'Dubai'
END

Save and run again.
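Conceptually, the generated load now evaluates the CASE expression inside the SELECT, roughly as below. The names and the join to Regions are illustrative assumptions; the real script depends on how Regions participates in your mapping.

-- Illustrative sketch only (hypothetical names and join condition)
INSERT INTO TRAINS_TGT (TRAIN_ID, NAME, CITY)
SELECT SRC.TRAIN_ID,
       SRC.NAME,
       CASE
         WHEN REG.REGION_ID = 1 THEN 'London'
         WHEN REG.REGION_ID = 2 THEN 'New York'
         WHEN REG.REGION_ID = 3 THEN 'Tokyo'
         WHEN REG.REGION_ID = 4 THEN 'Dubai'
       END
FROM   TRAINS_SRC SRC
JOIN   REGIONS REG ON REG.TRAIN_ID = SRC.TRAIN_ID;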


ODI - Knowledge module

What is KM:

Knowledge Modules (KMs) are code templates. Each KM is dedicated to an individual task in the overall data integration process. The code in the KMs appears in nearly the form that it will be executed except that it includes Oracle Data Integrator (ODI) substitution methods enabling it to be used generically by many different integration jobs. The code that is generated and executed is derived from the declarative rules and metadata defined in the ODI Designer module.

Each knowledge module contains the knowledge required by ODI to perform a specific set of actions or tasks against a specific technology or set of technologies such as connecting to this technology, extracting data from it, transforming the data, checking it, integrating it etc.


  • A KM will be reused across several interfaces or models. To modify the behavior of hundreds of jobs using hand-coded scripts and procedures, developers would need to modify each script or procedure. In contrast, the benefit of Knowledge Modules is that you make a change once and it is instantly propagated to hundreds of transformations. KMs are based on logical tasks that will be performed. They don't contain references to physical objects (datastores, columns, physical paths, etc.)
  • KMs can be analyzed for impact analysis.
  • KMs can't be executed standalone. They require metadata from interfaces, datastores and models.

KMs fall into 6 different categories, as summarized below:

Reverse-engineering KM
  • Retrieves metadata to the Oracle Data Integrator work repository
  • Used in models to perform a customized reverse-engineering
Check KM
  • Checks consistency of data against constraints
  • Used in models, sub models and datastores for data integrity audit
  • Used in interfaces for flow control or static control

Loading KM
  • Loads heterogeneous data to a staging area
  • Used in interfaces with heterogeneous sources
Integration KM
  • Integrates data from the staging area to a target
  • Used in interfaces
Journalizing KM
  • Creates the Change Data Capture framework objects in the source staging area
  • Used in models, sub models and datastores to create, start and stop journals and to register subscribers.
Service KM
  • Generates data manipulation web services
  • Used in models and datastores

Details of each type of KM:

Reverse Engineering KM:

The RKM role is to perform customized reverse engineering for a model. The RKM is in charge of connecting to the application or metadata provider then transforming and writing the resulting metadata into Oracle Data Integrator's repository. The metadata is written temporarily into the SNP_REV_xx tables. The RKM then calls the Oracle Data Integrator API to read from these tables and write to Oracle Data Integrator's metadata tables of the work repository in incremental update mode. 


A typical RKM follows these steps:

Cleans up the SNP_REV_xx tables from previous executions using the OdiReverseResetTable tool.

Retrieves sub models, datastores, columns, unique keys, foreign keys, conditions from the metadata provider to SNP_REV_SUB_MODEL, SNP_REV_TABLE, SNP_REV_COL, SNP_REV_KEY, SNP_REV_KEY_COL, SNP_REV_JOIN, SNP_REV_JOIN_COL, SNP_REV_COND tables.

Updates the model in the work repository by calling the OdiReverseSetMetaData tool.

Check Knowledge Modules (CKM):


The CKM is in charge of checking that records of a data set are consistent with defined constraints. The CKM is used to maintain data integrity and participates in the overall data quality initiative.

 The CKM can be used in 2 ways:

  • To check the consistency of existing data. This can be done on any datastore or within interfaces, by setting the STATIC_CONTROL option to "Yes". In the first case, the data checked is the data currently in the datastore. In the second case, data in the target datastore is checked after it is loaded.
  • To check consistency of the incoming data before loading the records to a target datastore. This is done by using the FLOW_CONTROL option. In this case, the CKM simulates the constraints of the target datastore on the resulting flow prior to writing to the target.

In summary: the CKM can check either an existing table or the temporary "I$" table created by an IKM.

The CKM accepts a set of constraints and the name of the table to check. It creates an "E$" error table which it writes all the rejected records to. The CKM can also remove the erroneous records from the checked result set.

In STATIC_CONTROL mode, the CKM reads the constraints of the table and checks them against the data of the table. Records that don't match the constraints are written to the "E$" error table in the staging area.

In FLOW_CONTROL mode, the CKM reads the constraints of the target table of the Interface. It checks these constraints against the data contained in the "I$" flow table of the staging area. Records that violate these constraints are written to the "E$" table of the staging area.

In both cases, a CKM usually performs the following tasks:

Create the "E$" error table on the staging area. The error table should contain the same columns as the datastore as well as additional columns to trace error messages, check origin, check date etc.

Isolate the erroneous records in the "E$" table for each primary key, alternate key, foreign key, condition, mandatory column that needs to be checked.

If required, remove erroneous records from the table that has been checked.
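As a rough illustration (not the code of any particular CKM), checking a mandatory column in static control mode could be expressed with SQL along these lines; all table and column names are hypothetical.

-- Illustrative sketch of a CKM-style check (hypothetical names)
-- 1. Create the error table in the staging area
CREATE TABLE E$_TRAINS
(
  TRAIN_ID   NUMBER,
  NAME       VARCHAR2(100),
  CITY       VARCHAR2(100),
  ERR_TYPE   VARCHAR2(1),       -- extra columns to trace the error
  ERR_MESS   VARCHAR2(250),
  CHECK_DATE DATE
);

-- 2. Isolate records that violate the constraint
INSERT INTO E$_TRAINS (TRAIN_ID, NAME, CITY, ERR_TYPE, ERR_MESS, CHECK_DATE)
SELECT TRAIN_ID, NAME, CITY, 'F', 'CITY is mandatory', SYSDATE
FROM   TRAINS_TGT
WHERE  CITY IS NULL;

-- 3. If required, remove the erroneous records from the checked table
DELETE FROM TRAINS_TGT WHERE CITY IS NULL;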

Loading Knowledge Modules (LKM):

An LKM is in charge of loading source data from a remote server to the staging area. It is used by interfaces when some of the source datastores are not on the same data server as the staging area. The LKM implements the declarative rules that need to be executed on the source server and retrieves a single result set that it stores in a "C$" table in the staging area.

The LKM creates the "C$" temporary table in the staging area. This table will hold records loaded from the source server.

The LKM obtains a set of pre-transformed records from the source server by executing the appropriate transformations on the source. Usually, this is done by a single SQL SELECT query when the source server is an RDBMS. When the source doesn't have SQL capabilities (such as flat files or applications), the LKM simply reads the source data with the appropriate method (read file or execute API).

The LKM loads the records into the "C$" table of the staging area.

An interface may require several LKMs when it uses datastores from different sources. When all source datastores are on the same data server as the staging area, no LKM is required.
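As an illustration of the pattern (not the code of a specific LKM), loading a remote SQL source into the staging area could look like the sketch below; the C$ name, columns and transformations are hypothetical.

-- Illustrative sketch of an LKM-style load (hypothetical names)
-- 1. Create the C$ work table in the staging area
CREATE TABLE C$_0TRAINS_SRC
(
  TRAIN_ID NUMBER,
  NAME     VARCHAR2(100)
);

-- 2. Load pre-transformed source records into it; on an RDBMS source this is
--    typically a single SELECT executed on the source server
INSERT INTO C$_0TRAINS_SRC (TRAIN_ID, NAME)
SELECT TRAIN_ID,
       UPPER(NAME)                 -- source-side transformation
FROM   TRAINS_SRC
WHERE  TRAIN_ID IS NOT NULL;       -- source-side filter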

Integration Knowledge Modules (IKM):

The IKM is in charge of writing the final, transformed data to the target table. Every interface uses a single IKM. When the IKM is started, it assumes that all loading phases for the remote servers have already carried out their tasks. This means that all remote source data sets have been loaded by LKMs into "C$" temporary tables in the staging area, or the source datastores are on the same data server as the staging area.

Therefore, the IKM simply needs to execute the "Staging and Target" transformations, joins and filters on the "C$" tables and on tables located on the same data server as the staging area. The resulting set is usually processed by the IKM and written into the "I$" temporary table before loading it to the target. These final transformed records can be written in several ways depending on the IKM selected in your interface. They may be simply appended to the target, or compared for incremental updates or for slowly changing dimensions.

There are 2 types of IKMs: those that assume that the staging area is on the same server as the target datastore, and those that can be used when it is not.


When the staging area is on the target server, the IKM usually follows these steps:

The IKM executes a single set-oriented SELECT statement to carry out staging area and target declarative rules on all "C$" tables and local tables (such as D in the figure). This generates a result set.

Simple "append" IKMs directly write this result set into the target table. More complex IKMs create an "I$" table to store this result set.

If the data flow needs to be checked against target constraints, the IKM calls a CKM to isolate erroneous records and cleanse the "I$" table.

The IKM writes records from the "I$" table to the target following the defined strategy (incremental update, slowly changing dimension, etc.).

The IKM drops the "I$" temporary table.

Optionally, the IKM can call the CKM again to check the consistency of the target datastore.

These types of KMs do not manipulate data outside of the target server. Data processing is set-oriented for maximum efficiency when performing jobs on large volumes.
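To make the flow concrete, an incremental-update style IKM with the staging area on the target could be sketched as below. This is only an illustration with hypothetical names; the actual code is generated by the IKM you select.

-- Illustrative sketch of an IKM-style flow (hypothetical names)
-- 1. Build the I$ table from the C$ tables and local tables
CREATE TABLE I$_TRAINS_TGT AS
SELECT C.TRAIN_ID, C.NAME, R.CITY
FROM   C$_0TRAINS_SRC C
JOIN   REGIONS_LOCAL  R ON R.TRAIN_ID = C.TRAIN_ID;

-- 2. Optionally call the CKM here to move bad rows to E$ and cleanse I$

-- 3. Write I$ to the target following the chosen strategy (here incremental update)
MERGE INTO TRAINS_TGT T
USING I$_TRAINS_TGT I
ON (T.TRAIN_ID = I.TRAIN_ID)
WHEN MATCHED THEN UPDATE SET T.NAME = I.NAME, T.CITY = I.CITY
WHEN NOT MATCHED THEN INSERT (TRAIN_ID, NAME, CITY)
                      VALUES (I.TRAIN_ID, I.NAME, I.CITY);

-- 4. Drop the temporary table
DROP TABLE I$_TRAINS_TGT;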


When the staging area is different from the target server, the IKM usually follows these steps:


The IKM executes a single set-oriented SELECT statement to carry out declarative rules on all "C$" tables and tables located on the staging area (such as D in the figure). This generates a result set.

The IKM loads this result set into the target datastore, following the defined strategy (append or incremental update).

This architecture has certain limitations, such as:

A CKM cannot be used to perform a data integrity audit on the data being processed.

Data needs to be extracted from the staging area before being loaded to the target, which may lead to performance issues.

Journalizing Knowledge Modules (JKM):

JKMs create the infrastructure for Change Data Capture on a model, a sub model or a datastore. JKMs are not used in interfaces, but rather within a model to define how the CDC infrastructure is initialized. This infrastructure is composed of a subscribers table, a table of changes, views on this table and one or more triggers or log capture programs.
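For a simple trigger-based JKM, the generated infrastructure is conceptually similar to the sketch below. The object names, columns and the 'SUNOPSIS' default subscriber are shown only for illustration; the exact DDL depends on the JKM.

-- Illustrative sketch of trigger-based CDC objects (hypothetical names)
CREATE TABLE J$TRAINS
(
  JRN_SUBSCRIBER VARCHAR2(30),
  JRN_FLAG       CHAR(1),        -- 'I' for insert/update, 'D' for delete
  JRN_DATE       DATE,
  TRAIN_ID       NUMBER          -- primary key of the journalized datastore
);

CREATE OR REPLACE TRIGGER T$TRAINS
AFTER INSERT OR UPDATE OR DELETE ON TRAINS_SRC
FOR EACH ROW
BEGIN
  IF DELETING THEN
    INSERT INTO J$TRAINS VALUES ('SUNOPSIS', 'D', SYSDATE, :OLD.TRAIN_ID);
  ELSE
    INSERT INTO J$TRAINS VALUES ('SUNOPSIS', 'I', SYSDATE, :NEW.TRAIN_ID);
  END IF;
END;
/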

Service Knowledge Modules (SKM):

SKMs are in charge of creating and deploying data manipulation Web Services to your Service Oriented Architecture (SOA) infrastructure. SKMs are set on a Model. They define the different operations to generate for each datastore's web service. Unlike other KMs, SKMs do not generate executable code but rather the Web Services deployment archive files. SKMs are designed to generate Java code using Oracle Data Integrator's framework for Web Services. The code is then compiled and eventually deployed on the Application Server's containers.


Tuesday, September 14, 2021

ODI - File Reading steps

Use case: Here, we will read a CSV file from a file location and then write it to an Oracle DB using a copied source file datastore.

Summary steps:

  1. Know the name/type of the file
  2. Prepare the path so that ODI can reach that area/machine/shared folder
  3. Create the physical and logical schemas
  4. Create the model; after that we can see our file as a datastore
  5. Create a target table (by copy-pasting from the datastore, or create it manually)
  6. Create a mapping and drag and drop these objects into the map.
  7. Choose knowledge modules; if necessary you may need to import new knowledge modules. Path of knowledge modules: ORACLE_HOME/odi/sdk/xml_reference.
  8. Run

Implementation Steps:

Step1: Create the source CSV file.

File name: sourcedata.csv
Headers: Name,empid,salary
Provide some values.
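For example, the file could look like this (the data rows are made-up sample values):

Name,empid,salary
John,101,5000
Maria,102,6500
Arun,103,7200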

Step2: Upload the file to the correct location.

Here we use the local location /home/oracle/Documents

Step3: Define source topology physical architecture

Go to Topology tab
Technologies
File -- new data server -- File_Generic(name)
Keep same jdbc driver details.
In the definition section provide the connection user and password and save -- OK with Local (No Agent)

On the created data server (File_Generic)-- new physical schema
In the definition area, provide the directory name for both schema and work schema.

Go to the context area and save it. It will show as empty.

Step4: Define source topology logical architecture

File -- new logical schema -- name(LGC_test)
Provide context values for following
Development
Global
Prod
Test

Link the created physical schema

Step5: Define source design model

Go to the design tab and create new model folder as file and then create new model as MDL_File
Provide following definition details:
Name: MDL_FILE
Technology: file
Logical schema: LGC_test

Do reverse engineering and save

File -- MDL_FILE -- create new data store
Name : sourcedata
Resourcename: sourcedata.csv
UNDER files:
Format: delimited
Heading(number of lines): 2
Field separator: other: ,

Go to attributes and do reverse engineering. It will populate all the CSV fields. Save and view the data.

Step6: preparing the target table.

Copy the created sourcedata datastore to another model, for example the HR model (it has an Oracle DB connection).

Open that copied datastore and change details such as the name and resource name to fileone.

Save

Step7: Mapping from source to target

Create a new project as file_projects
Create new mapping as map_file_import
Drag and drop both the datastores in the map area.
Link these 2 tables and map the fields. You can automap or also can do manually.

Save.

From the mapping area, go to the physical tab and change the loading knowledge module to LKM File to Oracle (Built-In) Global.

In the integration knowledge module options, set Create target table and Truncate target table to true.

Note: optionally you can import knowledge modules from the project's knowledge module section.
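As a side note, file-to-Oracle staging loads are often implemented with an Oracle external table over the CSV (for example by the external-table based LKMs). The sketch below is illustrative only and assumes a database directory object pointing at /home/oracle/Documents; the built-in LKM generates its own code.

-- Illustrative sketch only (hypothetical directory and table names)
CREATE OR REPLACE DIRECTORY SRC_FILES_DIR AS '/home/oracle/Documents';

CREATE TABLE C$_SOURCEDATA
(
  NAME   VARCHAR2(100),
  EMPID  NUMBER,
  SALARY NUMBER
)
ORGANIZATION EXTERNAL
(
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY SRC_FILES_DIR
  ACCESS PARAMETERS
  (
    RECORDS DELIMITED BY NEWLINE
    SKIP 1                          -- skip the header line
    FIELDS TERMINATED BY ','
  )
  LOCATION ('sourcedata.csv')
);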


Oracle Data Integrator all in one page links

12c ODI links below:

  1. ODI 12c Introduction
  2. Overview of Oracle Data Integrator Topology components
  3. ODI - how to add data servers and schemas in Topology
  4. Creating and using data models and datastores
  5. Creating the master work repository schema
  6. ODI type1 ODI 12c installation steps
  7. ODI type2 quickstart installation with virtual box
  8. ODI file reading steps
  9. odi-knowledge-module
  10. ODI mapping hardcode data and use case when with expression
  11. ODI - MobaXterm and Connect to private subnet from jumphost
  12. ODI - Create scenario under package
  13. ODI - Types of variables and its uses
  14. odi-define-variable-and-use-in-package
  15. ODI - Create a date variable using sysdate and use it in a package
  16. ODI - Lookup component
  17. odi-join-and-filter-components
  18. ODI - about load plans
  19. ODI - ELT vs ETL
  20. ODI - agents
  21. ODI - distinct component
  22. ODI - Use of sequence object in surrogate key
  23. ODI - split component
  24. ODI - aggregate component
  25. ODI - Sets union operation
  26. ODI - Join component
  27. ODI - Sort component
  28. ODI static vs flow control
  29. ODI - staging area and different temporary tables
  30. ODI - subquery filter component
  31. ODI - expression component
  32. ODI - Reusable mappings
  33. ODI - Datafile check used and allowed max size
  34. ODI - steps to increase overall adw size.
  35. odi-12c-stop-and-start-odi-instance from oci console
  36. odi-12c-scheduled-jobs-and-load-plans ran twice issue
  37. odi-changing-odi-schedule
  38. odi-steps-to-restart-odi-agent
  39. odi-sql-query-to-monitor-temp-size
  40. odi-11g-and-12c-architecture
  41. odi-12c-standalone-and-j2ee-agent
  42. odi-12c-ckm-with-flow-control-nd-static control and recycle errors
  43. odi-12c-cdc-implementation-using-jkm
  44. odi-12c-manually-run-load-plan
  45. odi-12c-procedure-and-options
  46. odi-12c-database-free-space-check-sql
  47. odi-activate-or-deactivate-your schedule on load plan
  48. odi-12c-loading-flat-file-to-sql-table
  49. odi-12c-loading-xml-file-to-sql-table
  50. odi-12c-use-unique-temporary-object-name
  51. odi-import-ootb-knowledge-modules
  52. odi-12c-model-creations-with-constraints
  53. odi-12c-scheduling-odi-load-plans
  54. odi-create-model-and-topology-objects using wizard
  55. odi-move-file-using-odifilemove-command
  56. odi-security-component
  57. odi-oracle-12c-database-installation
  58. ODI - Import and export
  59. odi-topology-steps-and-model-creations
  60. ODI- how to track load plan step package scenario behind code from session id
  61. odi-db-query-to-check-sessions-status
  62. ODI read fixed length file and insert or append to database
  63. ODI read a CSV file and insert to database
  64. odi ikm oracle incremental update vs IKM SQL control append
  65. ODI - Why ODI has two J$ views with CDC and Journalizing
  66. ODI 12C - Write file names from OS location to database
  67. ODI 12C - Importing flat file to ODI studio
  68. ODI 12c - Read Multiple files and store them in database table
  69. ODI12c - Why the set variable increment parameter is disabled in package?
  70. ODI12C - How to open ODI studio from console or terminal
  71. Odi 12c - How to clean cache to start odi studio from console
  72. ODI 12c - Buffer too small for CLOB to CHAR or BLOB to RAW conversion

Monday, September 13, 2021

ODI - Type2 - Quick start Installation with Virtual box

About ODI 12c VM:

The following are available with the installation:

  • ODI 12c on Oracle Linux Platform
  • Source Application on Oracle Database
  • Source File System
  • Target application on Oracle Database
  • ODI 12c studio
    • Topology created for Source and Target databases
    • Models reverse Engineered
    • Knowledge modules added to the project
    • Pre existing mappings

Steps to download VM and install ODI 12c appliances:

Step1: Go to Google and search for ODI 12c Demo download. You will get the following link:

https://www.oracle.com/middleware/technologies/data-integrator/odi-demo-downloads.html

Download the Oracle VM virtual box

Download all 5 of the ODI 12c Getting Started VM .7z files and save them in a folder


Also, download the ODI 12c Getting Started VM Installation Guide for more help.


Step2: Install Oracle Virtual box

Step3: Now unzip the downloaded 1st file to get the .ova file.

Step4: Import the ova file to virtual box manager

Step5: Once the import is done, start the Linux box and open ODI 12c Studio.

Step6: Connect to the repository, provide the wallet password as welcome1 and provide the connection details.

Network Configuration:

In the default setup, the virtual machine can't connect to the internet directly. We have to manually configure the VM so it can reach the internet through the host machine.

Steps:

Virtual box File >> preferences >> Network >> add network >> ok 

Settings >> Network >> select Adapter1 >> Enable network adapter (NAT Network) >> select Adapter2 >> Enable network adapter (Bridged adapter) >> OK

Then start ODI 12c from the VM and check internet access from any browser.


Shared Folder between Virtual box and Host Machine:

Create a folder on the host machine, for example VM Share in the C drive.

Go to the ODI 12c app >> Devices >> Shared Folders >> Shared Folders settings >> Shared Folders >> click the + button >> choose the created host machine folder VM Share >> choose the Auto-mount and Make permanent options (the folder name will show as VM_Share) >> OK

Open a terminal (/home/oracle) >> create a Linux folder (mkdir Share) >> cd Share >> chmod 777 /home/oracle/Share

su root

Password: oracle

mount -t vboxsf VM_Share /home/oracle/Share


Notes:

1. You can also download VirtualBox from the below site:

https://virtualbox.org/wiki/Downloads

2. You can also download 7-Zip from the below link:

https://www.7-zip.org/download.html


Saturday, September 11, 2021

ODI - Type1- Installation of ODI 12c

Installation needs following 2 components:

  • Search for ODI in Google and download the software Disk1. The latest version is now Oracle Data Integrator 12c (12.2.1.4.0).
  • JDK version 1.8

Simple steps:

Step1: Open CMD in admin mode.

Step2: Go to the Java bin path like "C:\Program Files\Java\jdk1.8.0_51\bin"

Step3: java -jar <ODI file path>

ODI file path like C:\Users\Srinanda\Desktop\fmw_12.2.1.4.0_odi_Disk1_1of1\fmw*.jar

Step4: The installer will start. Choose Standard or Enterprise

Next

Install

Step5: Once the installation is done, connect to the repository.
















