Thursday, December 18, 2025

OIC - Event-Driven Notification Framework in Oracle Integration Cloud (OIC) Using Apache Kafka


🧩 Use Case

Enterprises need a reliable, scalable, and asynchronous notification platform that can handle:

  • High-volume events
  • Multiple notification channels (SMS, App, Email)
  • Controlled retries
  • Failure isolation
  • End-to-end observability

The challenge is to integrate Apache Kafka with Oracle Integration Cloud (OIC) while ensuring:

  • Guaranteed message delivery
  • Retry and dead-letter handling
  • Clean separation between orchestration and channel delivery

🎯 Solution Overview

This solution leverages:

  • Apache Kafka as the event backbone
  • Oracle Integration Cloud (OIC) for orchestration
  • Retry Topic + Dead Letter Topic (DLT) pattern
  • Channel-specific child integrations
  • External notification providers

🏗️ High-Level Architecture

Core Components

  • Kafka Topics
    • notification-main-topic
    • notification-retry-topic
    • notification-dead-topic
  • OIC Integrations
    • Publisher Integration
    • Main Consumer Integration
    • Retry Consumer Integration
    • Child Channel Integrations (SMS, App, Email)

🔄 End-to-End Flow

1️⃣ Event Publishing to Kafka

  • The source system (CCS / upstream app) sends a notification request to OIC.
  • The OIC Publisher Integration publishes the event to:
    • Kafka Main Topic

OIC connects to Kafka via a REST Proxy or an OCI Service Connector / API Gateway, depending on the deployment.
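As an illustration, a publish through a Confluent-style Kafka REST Proxy could look like the Python sketch below (the proxy URL, partition key, and payload fields beyond correlationId/retryCount are assumptions; adapt them to your deployment):

import requests

REST_PROXY = "https://kafka-rest-proxy.example.com"   # hypothetical proxy endpoint
TOPIC = "notification-main-topic"

event = {
    "records": [{
        "key": "ORDER-12345",                          # partition key (assumed)
        "value": {
            "notificationType": "SMS",                 # SMS | APP | EMAIL
            "correlationId": "9f2b3c7a-0001",          # end-to-end tracking id
            "retryCount": 0,                           # incremented on each retry
            "payload": {"to": "+15550100", "message": "Your order has shipped"}
        }
    }]
}

resp = requests.post(
    f"{REST_PROXY}/topics/{TOPIC}",
    json=event,
    headers={"Content-Type": "application/vnd.kafka.json.v2+json"},
    timeout=10,
)
resp.raise_for_status()   # the proxy returns partition/offset details on success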


2️⃣ Kafka Main Topic Consumption

  • OIC Main Consumer Integration consumes messages from:
    • notification-main-topic
  • Performs:
    • Payload validation
    • Header enrichment (correlationId, retryCount)

3️⃣ Notification Orchestration

The OIC Main Service Orchestrator:

  • Identifies the notification type
  • Routes to the correct child integration (see the routing sketch after this list):
    • SMS
    • App Push
    • Email
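Conceptually, the orchestrator's routing step reduces to a lookup from notification type to child-integration endpoint. A minimal Python sketch (the endpoint URLs and integration identifiers are hypothetical; in OIC this is a Switch action):

# Hypothetical child-integration endpoints; in OIC this is a Switch action
# routing to three separate child integrations.
CHANNEL_ROUTES = {
    "SMS":   "https://oic.example.com/ic/api/integration/v1/flows/rest/OIC_CHILD_SMS/1.0/send",
    "APP":   "https://oic.example.com/ic/api/integration/v1/flows/rest/OIC_CHILD_APP/1.0/send",
    "EMAIL": "https://oic.example.com/ic/api/integration/v1/flows/rest/OIC_CHILD_EMAIL/1.0/send",
}

def route(event: dict) -> str:
    """Return the child endpoint for an event, failing fast on unknown types."""
    try:
        return CHANNEL_ROUTES[event["notificationType"]]
    except KeyError:
        raise ValueError(f"Unsupported notification type: {event.get('notificationType')}")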

4️⃣ Channel-Specific Child Integrations

Child Integration   External System
OIC-Child-SMS       Twilio
OIC-Child-App       Azure
OIC-Child-Email     SendGrid

Each child integration:

  • Invokes the external provider
  • Returns a success or failure response

5️⃣ Retry Handling with Kafka Topics

If processing fails:

  • The message is published to the Kafka Retry Topic
  • retryCount is incremented
  • Optional delay logic is applied (consumer-side or scheduler-based)

The Retry Consumer Integration (sketched below):

  • Consumes from notification-retry-topic
  • Reprocesses message
  • Republishes to Main Topic if retry count < max
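The retry decision can be sketched with the kafka-python client. The topic names follow this design; the broker address and max-retry value are assumptions, and retryCount is carried in the JSON payload here for brevity, although the design above keeps it in message headers:

import json
from kafka import KafkaConsumer, KafkaProducer

MAX_RETRIES = 3   # externalize this threshold in a real deployment

consumer = KafkaConsumer(
    "notification-retry-topic",
    bootstrap_servers="kafka.example.com:9092",   # assumed broker address
    group_id="oic-retry-consumer",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)
producer = KafkaProducer(
    bootstrap_servers="kafka.example.com:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

for msg in consumer:
    event = msg.value
    event["retryCount"] = int(event.get("retryCount", 0)) + 1
    if event["retryCount"] < MAX_RETRIES:
        # Below the threshold: republish to the main topic for another attempt.
        producer.send("notification-main-topic", event)
    else:
        # Exhausted: park the message on the dead-letter topic for audit.
        producer.send("notification-dead-topic", event)
    producer.flush()   # block until the routed message is acknowledged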

6️⃣ Dead Letter Topic (DLT) Handling

When the retry count exceeds the threshold:

  • Message is published to:
    • notification-dead-topic
  • Dead messages are:
    • Logged to Datadog
    • Stored for audit and manual reprocessing

🔁 Retry Logic Summary

Stage                Action
Main Topic Failure   Send to Retry Topic
Retry Count < Max    Reprocess
Retry Count = Max    Send to Dead Topic
Dead Topic           Monitoring + Alerting

📊 Design Diagram (Kafka + OIC)

Source System (CCS)
        |
        v
   OIC Publisher
        |
        v
+-----------------------+
| Kafka Main Topic      |
| notification-main     |
+-----------------------+
        |
        v
 OIC Main Consumer
        |
        v
 OIC Main Orchestrator
   |        |        |
   |        |        |
  SMS      APP     EMAIL
   |        |        |
 Twilio   Azure  SendGrid
   |
Failure?
   |
   v
+-----------------------+
| Kafka Retry Topic     |
| notification-retry    |
+-----------------------+
        |
        v
 OIC Retry Consumer
        |
Retry < Max?
   |        |
  Yes      No
   |        |
   v        v
Main     Kafka Dead Topic
Topic    notification-dead
             |
             v
          Datadog

🛠️ Kafka Design Best Practices

✅ Separate topics for main, retry, and dead messages
✅ Include retryCount and correlationId in headers
✅ Use consumer groups for scalability
✅ Idempotent processing in OIC (see the sketch below)
✅ Externalized retry thresholds
✅ Centralized logging and monitoring
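For instance, idempotent processing can be approximated by de-duplicating on correlationId before a channel is invoked. A minimal in-memory sketch; a real deployment would persist the seen set in ATP or a cache:

processed_ids: set[str] = set()   # in production, back this with ATP or Redis

def handle_once(event: dict, deliver) -> bool:
    """Invoke the delivery callback only the first time a correlationId is seen."""
    cid = event["correlationId"]
    if cid in processed_ids:
        return False              # duplicate redelivery: safely skip
    deliver(event)                # channel-specific delivery callback
    processed_ids.add(cid)        # record only after successful delivery
    return True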


📌 OIC Integration Design Notes

  • Use App-Driven Orchestration for consumers
  • Use Stage File or ATP for temporary persistence (optional)
  • Secure Kafka via:
    • SASL / SSL
    • API Gateway
  • Use Tracking ID for end-to-end visibility

🚀 Benefits of Kafka-Based Design

✔ High throughput & durability
✔ Decoupled producers and consumers
✔ Controlled retries without message loss
✔ Cloud-agnostic architecture
✔ Easy scaling across consumers


🔚 Conclusion

By integrating Apache Kafka with Oracle Integration Cloud, organizations can build a resilient, scalable, and fault-tolerant notification framework that supports enterprise-grade messaging with full observability and retry control.


Tuesday, December 9, 2025

Microsoft Excel - Working with Large Sets of Excel Data

📘 Microsoft Excel – Working with Large Sets of Data

Managing large datasets in Excel becomes easier when you use the right tools. Below are the most important features to improve navigation, analysis, and printing.


1️⃣ Freeze Panes – Keep Headers Visible

Freeze Panes helps you lock specific rows or columns so they remain visible while scrolling through large files.

How to Use

  • Freeze Top Row
    View → Freeze Panes → Freeze Top Row
  • Freeze First Column
    View → Freeze Panes → Freeze First Column
  • Freeze Specific Rows & Columns
    1. Select the cell below the row and to the right of the column you want to freeze
    2. Go to View → Freeze Panes

Use Case

When scrolling through thousands of records, the headers stay visible, making data easier to understand.




2️⃣ Grouping Data (Rows or Columns)

Grouping helps collapse and expand sections of data—ideal for large sheets with repeated categories.

How to Use

  1. Select the rows or columns you want to group
  2. Go to Data → Group
  3. Use the + / – signs to expand or collapse

Use Case

  • Group rows of monthly data under quarters
  • Hide detail rows and show only the summary
  • Manage long reports neatly


3️⃣ Print Options for Large Data Sets

Printing large worksheets can be messy unless properly formatted.

Key Print Tools

  • Print Titles: Repeat header rows on every page
    Page Layout → Print Titles → Rows to Repeat at Top
  • Fit Sheet on One Page
    File → Print → Scaling → Fit Sheet on One Page
  • Set Print Area
    Page Layout → Print Area → Set Print Area
  • Page Break Preview
    View → Page Break Preview to adjust how pages split

Use Case

Ensures that large reports print cleanly, with titles repeated on every page and no cut-off columns.



4️⃣ Linking Worksheets (3D Formulas)

3D formulas allow calculations across multiple worksheets at once.

How to Use

Example: Sum cell B2 from Sheet1 to Sheet5:

=SUM(Sheet1:Sheet5!B2)

Use Case

Perfect for:

  • Monthly data sheets
  • Summarizing identical layouts (Jan–Dec)
  • Creating dashboards pulling from multiple sheets
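For example, assuming twelve identically laid-out monthly sheets named Jan through Dec (3D references follow tab order, so keep the sheets adjacent):

=SUM(Jan:Dec!B2)
=AVERAGE(Jan:Dec!B2)
=MAX(Jan:Dec!B2)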


5️⃣ Consolidating Data from Multiple Worksheets

Combine data from multiple sheets into one summary sheet.

How to Use

  1. Go to Data → Consolidate
  2. Choose function (Sum, Count, Average, etc.)
  3. Add references from multiple sheets
  4. Select “Top row” and “Left column” labels if needed
  5. Click OK

Use Case

  • Combine monthly sales into yearly summary
  • Merge repeated formats from different teams or regions
  • Build master reports automatically




🎯 Summary Table

Feature         Purpose
Freeze Panes    Keep headers visible while scrolling
Grouping        Collapse/expand large sections of data
Print Options   Print large sheets cleanly and professionally
3D Formulas     Use formulas across multiple sheets
Consolidate     Merge data from several sheets into one


Monday, December 8, 2025

OIC – Unified File Reprocessing Framework Using Scheduler Flag, Dual SFTP Sources, Child Integration, and Centralized Error Handling

📌 Use Case

Business processes often require both file reprocessing and fresh file processing, each from a different location:

  1. OIC SFTP → for reprocessing previously failed files
  2. Vendor SFTP → for processing newly delivered source files

To manage both scenarios within a single automated pipeline, we use a Scheduler flag, a central Main Integration, and a dedicated Child Integration. A unified Global Fault Handler sends errors to CCS, logs them into Datadog, and moves failed files to an error directory.

This ensures stable, recoverable, and auditable file processing within OIC.


🛠️ Solution Steps


1️⃣ Scheduler Integration – Control the Processing Mode

The scheduler integration triggers the main service and passes a flag:

filePresentInOICSFTP = Y

  • System should process files from OIC SFTP
  • This is used for reprocessing failed files

filePresentInOICSFTP = N

  • System should process files from Vendor SFTP
  • This handles fresh daily load

This flag enables dynamic routing and avoids maintaining separate integrations for reprocessing.
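The branching performed by the main integration can be sketched as follows (in OIC this is a Switch action; the helper functions stand in for FTP adapter operations and are hypothetical):

# Stubs standing in for OIC FTP adapter operations (hypothetical).
def list_files_oic_sftp(): return ["retry_20251208.csv"]
def list_files_vendor_sftp(): return ["fresh_20251208.csv"]
def download_from_vendor(name): return b"...file bytes..."
def write_to_oic_sftp(name, content): pass
def call_child_integration(file_name, flag): print(f"child <- {file_name} (flag={flag})")

def main_integration(file_present_in_oic_sftp: str) -> None:
    """Route to reprocessing or fresh-run mode based on the scheduler flag."""
    if file_present_in_oic_sftp == "Y":
        # Reprocessing: failed files already sit on OIC SFTP.
        for name in list_files_oic_sftp():
            call_child_integration(file_name=name, flag="Y")
    else:
        # Fresh run: pull from vendor SFTP, stage on OIC SFTP, then process.
        for name in list_files_vendor_sftp():
            write_to_oic_sftp(name, download_from_vendor(name))
            call_child_integration(file_name=name, flag="N")

main_integration("N")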


2️⃣ Main Integration – File Discovery & Preparation

The main service behaves differently based on the flag.


✔️ A. If filePresentInOICSFTP = Y (Reprocessing Mode)

  • List files from OIC SFTP
  • For each file found:
    • Call the Child Integration
    • Pass file name + path + flag

No need to download or create new files—direct reprocessing.


✔️ B. If filePresentInOICSFTP = N (Fresh Run Mode)

  • List files from Vendor SFTP
  • For each file:
    • Download file from vendor SFTP
    • Write the file to OIC SFTP for internal processing
    • Call the Child Integration, passing the newly stored file name

This ensures all files—new or old—are processed uniformly through OIC SFTP.


3️⃣ Child Integration – Validation & File Handling

This integration performs the actual business logic.


✔️ Step A: Retrieve File Reference

Since files are stored in OIC SFTP in both modes:

  • Use File Server Action → Read File
  • Generate file reference for reading & parsing

✔️ Step B: Read File Content

  • Read the contents of the file using file reference
  • Parse the data based on flat file layout

✔️ Step C: Business Validation

Apply the core rule (see the sketch below):

The required field must contain “00”.

If the condition fails:

  • Use Throw New Fault
    • This forces execution of the Global Fault Handler
    • Ensures controlled, trackable exceptions
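A sketch of the rule and the fault it raises (the field name is hypothetical; in OIC the raise corresponds to the Throw New Fault action):

class ValidationFault(Exception):
    """Stands in for Throw New Fault, which routes to the Global Fault Handler."""

def validate_record(record: dict) -> None:
    # Core rule: the required field must contain "00".
    status_field = record.get("statusCode", "")   # hypothetical field name
    if "00" not in status_field:
        raise ValidationFault(
            f"Required field check failed: expected '00', got '{status_field}'"
        )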

✔️ Step D: Delete Source File

After reading:

  • Delete file from OIC SFTP

If the file originally came from the vendor SFTP:

  • The vendor file was already cleaned up earlier by the main service
  • (or it can be cleaned up here, depending on your design)

This avoids duplicate processing.


4️⃣ Global Fault Handler – Centralized Error Management

When the child flow throws an exception:


✔️ A. Create Case in CCS (TO Module)

  • Send error details and file metadata
  • Trigger TO creation for functional follow-up

✔️ B. Log Errors to Datadog

Log structured fields:

  • File name
  • Validation failure message
  • Timestamp
  • Integration instance ID
  • Severity

This ensures observability and alerting.
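As an illustration, shipping such a structured error to Datadog's HTTP log intake could look like the sketch below (the endpoint and header follow Datadog's v2 logs API; the API key and field values are placeholders):

from datetime import datetime, timezone
import requests

DD_URL = "https://http-intake.logs.datadoghq.com/api/v2/logs"
DD_API_KEY = "<your-datadog-api-key>"   # placeholder

error_log = [{
    "ddsource": "oic",
    "service": "file-reprocessing-framework",
    "status": "error",                                    # severity
    "message": "Validation failed: required field does not contain '00'",
    "fileName": "fresh_20251208.csv",                     # file metadata (placeholder)
    "integrationInstanceId": "INST-0001",                 # OIC instance id (placeholder)
    "timestamp": datetime.now(timezone.utc).isoformat(),
}]

resp = requests.post(DD_URL, json=error_log, headers={"DD-API-KEY": DD_API_KEY}, timeout=10)
resp.raise_for_status()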


✔️ C. Move the File to Error Directory

  • Take the original payload/file
  • Move it to OIC SFTP Error Folder
  • Prevents accidental deletion of faulty files
  • Allows manual reprocessing if needed

✔️ Final Outcome

This end-to-end design enables:

  • Single unified pipeline for fresh processing + reprocessing
  • Smart routing based on Scheduler flag
  • Guaranteed file availability in OIC SFTP for consistent logic
  • Strong validation using controlled Throw New Fault
  • Automatic integration with CCS for error cases
  • Observability via Datadog
  • Robust error-handling and file preservation
  • Minimal manual intervention

A complete, enterprise-grade framework for file ingestion & reprocessing in OIC.


Sunday, December 7, 2025

OIC – Ensuring Accurate Status Reporting in Real-Time Sync Integrations Using track_var_3

📌 Use Case

In real-time (synchronous) integrations, we noticed an issue where the OIC dashboard shows the instance status as Succeeded even though the internal flow actually failed.
This mismatch creates confusion for support teams and impacts monitoring because synchronous integrations do not properly propagate faults in certain failure scenarios.

To fix this, we implemented a custom tracking mechanism using track_var_3, where we manually mark the integration as SUCCESS or ERROR depending on the actual execution of the full flow.


🛠️ Solution Steps

1️⃣ Identify the Problem Scenario

  • The integration appears as Succeeded in the monitoring dashboard.
  • But inside the flow, some actions failed (for example, a failed DB operation, a REST error, or a transformation issue).
  • No global fault was triggered, so OIC still treated the instance as a success.

2️⃣ Introduce a Tracking Variable (track_var_3)

Use the built-in tracking variables of OIC.

  • Reserve track_var_3 exclusively for storing the final status of the integration.
  • It will be updated as:
    • "SUCCESS" → when the full flow is executed without issues
    • "ERROR" → when any failure occurs

This variable is visible in monitoring and helps support quickly identify the real outcome.


3️⃣ Assign “SUCCESS” at the End of the Main Flow

At the last step of your primary flow:

  • Add an Assign Action
  • Set:
    track_var_3 = "SUCCESS"
    

This ensures the flow marks itself successful only after all business logic passes.


4️⃣ Handle Failures via Global Fault

In the Global Fault Handler, add:

  • An Assign Action
    track_var_3 = "ERROR"
    

This ensures:

  • Any unhandled fault
  • Any adapter failure
  • Any transformation error
    → Automatically marks the instance as ERROR.


5️⃣ Use the Tracking Variable for Support Monitoring

The support team will now monitor:

  • Instance status (may still show “Succeeded” due to OIC behaviour)
  • track_var_3 value (always accurate)

This provides the actual integration result.


6️⃣ Optional: Return Status to Calling App

In synchronous integrations:

  • Respond back to the caller with
    { "status": track_var_3 }
    

so that the caller also receives the correct success/error status.
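On the caller's side, checking the body-level status rather than relying on the HTTP code alone could look like this (the endpoint and request payload are hypothetical):

import requests

resp = requests.post(
    "https://oic.example.com/ic/api/integration/v1/flows/rest/SYNC_SERVICE/1.0/run",
    json={"orderId": "12345"},   # hypothetical request payload
    timeout=30,
)
resp.raise_for_status()          # catches transport-level failures only

# The flow may return HTTP 200 even when business logic failed,
# so inspect the status derived from track_var_3.
if resp.json().get("status") != "SUCCESS":
    raise RuntimeError("Integration reported ERROR despite HTTP success")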


✔️ Final Outcome

With this approach:

  • Even if OIC marks the integration as “Succeeded”,
  • Your tracking variable clearly indicates the correct status
  • Support teams get accurate visibility
  • No missed failures in real-time sync services

Tuesday, December 2, 2025

OIC - Sharing Lookup Values Across Projects Using dvm:sharedLookupValue

Use Case

In many enterprise integrations, multiple projects need to reference the same configuration data such as:

  • SFTP locations
  • Integration IDs
  • File paths
  • Environment-specific values

Maintaining separate lookup files for each project leads to:

  • Data inconsistency
  • High maintenance effort
  • Deployment overhead across environments

To avoid this, we use a Shared Lookup (DVM) that can be accessed across different OIC projects using the built-in function:

dvm:sharedLookupValue()

This ensures centralized configuration management and reusability.


Solution Approach

We store common configuration values in a Shared Lookup (DVM) under one project and access it from another project using the sharedLookupValue function inside an Assign action.

Step 1: Create a Shared Lookup

In the source project:

  • Create a lookup (DVM) named:
    Common_CCS_Interface_SFTP_Lookup
    
  • Example structure:

    IntegrationId   SourceFileLocation
    INT105          /ftp/inbound/path

Step 2: Call Shared Lookup from Another Project

In the target project, use an Assign Action with this expression:

dvm:sharedLookupValue(
  'CUSTOMER_BO2_01',
  'Common_CCS_Interface_SFTP_Lookup',
  'IntegrationId',
  'INT105',
  'SourceFileLocation',
  'NA'
)

Parameter Explanation

Parameter                          Description
CUSTOMER_BO2_01                    Source project code
Common_CCS_Interface_SFTP_Lookup   Shared lookup name
IntegrationId                      Source column
INT105                             Source value
SourceFileLocation                 Target column
NA                                 Default value if no match is found

Benefits
  • ✅ Single source of truth
  • ✅ Reduced deployment effort
  • ✅ Easy maintenance
  • ✅ Cross-project reusability
  • ✅ Environment-specific control

Conclusion

Using dvm:sharedLookupValue in Oracle Integration Cloud enables seamless sharing of lookup values across projects, improves governance, and significantly reduces operational overhead. This approach is highly recommended for managing common SFTP paths, integration identifiers, and environment configurations in large OIC programs.







