Wednesday, September 7, 2022

ODI 12c - Introduction

If we simply look around our own home, we can see data being generated everywhere: laptops, tablets, mobiles, PCs and so on all generate data. Similarly, an enterprise also has different applications generating data for different purposes, and we need to fetch data from all of these applications, like ERP, Concur, Salesforce, ADP, PeopleSoft, Lotus Notes, warehouse management systems, Access, etc., to keep all the systems in sync or for analytical reporting purposes. Before ODI (or Informatica or any other integration tool) comes into the picture, integrating all these systems ends up looking complex.

With ODI, we can integrate these systems very easily, store all of their data in a data warehouse, and generate reports easily using an analytical tool like FAW.

With traditional ETL tools, we were doing an ETL load (Extract to a staging server, Transform, then Load to the target table).


Unlike ETL tools, which need a separate staging server, ODI 12c just needs a staging area within the target database. Since there is no need for a staging server, there are considerable savings on infrastructure-related expenses. This approach is called an ELT load, and the small sketch below shows the idea.
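Here is a minimal, purely illustrative ELT sketch using Python's built-in sqlite3 module as a stand-in for the target database. The table and column names are made up, and this is not ODI-generated code; it only shows the pattern: raw rows are Loaded into a staging table inside the target database, and the Transform runs there as set-based SQL.

import sqlite3

conn = sqlite3.connect(":memory:")  # stands in for the target database
cur = conn.cursor()

# Target table plus a C$-style staging table in the same database.
cur.execute("CREATE TABLE target_orders (order_id INTEGER, amount_usd REAL)")
cur.execute('CREATE TABLE "C$_ORDERS" (order_id INTEGER, amount_cents INTEGER)')

# Extract + Load: raw source rows land in staging untouched.
source_rows = [(1, 1050), (2, 2599)]  # pretend these came from a source app
cur.executemany('INSERT INTO "C$_ORDERS" VALUES (?, ?)', source_rows)

# Transform: runs inside the target database as plain SQL.
cur.execute('INSERT INTO target_orders '
            'SELECT order_id, amount_cents / 100.0 FROM "C$_ORDERS"')
conn.commit()
print(cur.execute("SELECT * FROM target_orders").fetchall())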


And if you are thinking of using Informatica: no way, BI Apps 11g and above do not work with or support Informatica. Existing Oracle BI Apps customers will have to move their ETL tool to ODI if they want to upgrade to the latest version of BI Apps.

What is ODI:

  • Data integration involves combining data from several disparate sources, stored using various technologies, to provide a unified view of the data.
  • ODI is a comprehensive data integration platform that covers all data integration requirements: from high-volume, high-performance batch loads to event-driven, trickle-feed integration.
  • It is an ELT tool (Extract, Load and Transform) used for high-speed data movement between disparate systems.
  • ODI was initially called "Sunopsis Data Integrator". Oracle acquired Sunopsis in 2006, and it then became ODI.


Why Data Integrator:

Consider a real-life scenario: we have a complex system landscape. If a manager wants consolidated reporting across all these applications, the data has to be read individually from each application and then consolidated, which takes a humongous amount of time and effort.

Instead, using ODI we can integrate all the systems' data into one place, like a data warehouse, and use a warehouse analytics tool for the reports. This approach is scalable and less expensive to maintain.

ODI Studio overview / navigators:
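ODI Studio is organized into four navigators:
  • Designer - for models, mappings, packages and other development objects
  • Operator - for monitoring sessions and reviewing execution logs
  • Topology - for defining the physical and logical architecture and connections
  • Security - for managing users, profiles and privileges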



ODI Development process flow:


Process flow points:
  • Once the ODI installation is done, the admin goes through security and reviews ODI users, their profiles and privileges.
  • Next, the admin creates connections between the ODI repository and the other source and target databases.
  • The developer then creates models (reverse-engineers metadata) over these connections and builds mappings between source and target objects. We can also use a procedure, which is another way to transfer data from source to target.
  • Next, a package is created, containing the process flow that shows the sequence in which these procedures and mappings will run.
  • After the package is created, the next step is to run these packages. To organize and deploy them, we create scenarios. A scenario is a frozen snapshot of a mapping or package. Once a scenario is created, it can be migrated to different environments without the need to migrate related components like packages and mappings.
  • Next, we create a load plan, which describes a hierarchy of steps of scenarios to be executed in series or in parallel (see the sketch after this list).
  • Using agents, we can schedule the ODI load plans and scenarios.
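To make the series/parallel idea concrete, here is a conceptual sketch in Python. It is not ODI's actual API, and the scenario names are made up; a load plan is essentially this kind of tree, where a parallel step runs its children concurrently and a serial step runs them one after another.

from concurrent.futures import ThreadPoolExecutor

def run_scenario(name: str) -> None:
    # a real ODI agent would start a session for the scenario here
    print(f"running scenario {name}")

def run_serial(names):
    # serial step: each scenario waits for the previous one to finish
    for name in names:
        run_scenario(name)

def run_parallel(names):
    # parallel step: child scenarios run concurrently
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(run_scenario, n) for n in names]
        for f in futures:
            f.result()  # wait for all children and surface any failure

# Example hierarchy: load dimensions in parallel, then facts in series.
run_parallel(["LOAD_DIM_CUSTOMER", "LOAD_DIM_PRODUCT"])
run_serial(["LOAD_FACT_SALES", "LOAD_FACT_RETURNS"])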

Data flow architecture:


Flow points:

  • Once the source and target connections and the model are created, the LKM (Load Knowledge Module) comes into the picture and loads the source data into the C$ staging table on the target.
  • The IKM (Integration Knowledge Module) helps load the data from the staging table into the target table.
  • The CKM (Check Knowledge Module) helps check data constraints before the load into the target table. The sketch below shows the overall shape.
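Roughly, the knowledge modules generate SQL of the following shape. This is only an illustrative sketch using Python's sqlite3 as a stand-in database: the C$ and E$ prefixes follow ODI's staging and error table naming, but the tables, columns and the salary constraint are made up, and real KM-generated SQL is far more involved.

import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute('CREATE TABLE target_emp (emp_id INTEGER, salary REAL)')
cur.execute('CREATE TABLE "C$_EMP" (emp_id INTEGER, salary REAL)')  # staging
cur.execute('CREATE TABLE "E$_EMP" (emp_id INTEGER, salary REAL)')  # errors

# LKM: pull the source rows into the C$ staging table.
cur.executemany('INSERT INTO "C$_EMP" VALUES (?, ?)',
                [(1, 50000.0), (2, -100.0)])  # row 2 violates our check

# CKM: divert rows that fail a constraint (salary must be positive)
# into the E$ error table, then remove them from staging.
cur.execute('INSERT INTO "E$_EMP" SELECT * FROM "C$_EMP" WHERE salary <= 0')
cur.execute('DELETE FROM "C$_EMP" WHERE salary <= 0')

# IKM: integrate the remaining clean rows into the target table.
cur.execute('INSERT INTO target_emp SELECT * FROM "C$_EMP"')
conn.commit()
print("target:", cur.execute('SELECT * FROM target_emp').fetchall())
print("errors:", cur.execute('SELECT * FROM "E$_EMP"').fetchall())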
ODI Repository:
  • A repository is nothing but a relational database containing all the ODI details.
  • In general, repositories are created using the Repository Creation Utility (RCU).
  • The two main schemas in the repository are the master repository and the work repository (one or multiple).
  • The master repository contains the system topology, like the connection details of the source and target data, versions of project components, and security.
  • The work repository contains metadata for models, like source and target tables, and project design components, like the mappings between source and target tables, as well as components like procedures, functions, packages, etc.
  • A work repo should always be attached to a master repo.
  • Work repositories are further divided into development and execution repositories.
    • Development repo:
      • contains all the development objects like models, project details, scenarios and load plans, as well as the execution logs.
      • used for dev environments.
    • Execution repo:
      • contains only execution components like scenarios and load plans.
      • used for prod environments.
Agents:
ODI agents are lightweight Java processes that orchestrate the overall data integration process. We cannot schedule scenarios or run load plans without agents.

Agent types:
  • Standalone
  • JEE
  • Colocated
For more details on agents, follow my blog post below:

Admin role vs Developer role // Master vs Work Repo:



