Wednesday, December 31, 2025

OIC - Understanding Message Pack Consumption in Oracle Integration Cloud (OIC)

Introduction

Oracle Integration Cloud (OIC) licensing is based on message pack consumption, and misunderstanding how messages are billed can lead to unexpected cost overruns. Each integration design choice—payload size, trigger type, file handling, and invoke responses—directly impacts message usage.

This blog explains:

  • How OIC message packs are consumed
  • Key billing rules you must consider
  • Real-world use cases
  • Practical solution steps to optimize message consumption

What Is a Message Pack in OIC?

A message pack represents a predefined number of messages processed by an OIC instance.

Consumption Models

  • BYOL (Bring Your Own License): 1 pack = 20,000 messages/hour
  • Non-BYOL: 1 pack = 5,000 messages/hour
  • OIC for SaaS (metered monthly): 1 pack = 1 million messages/month

Each message is calculated in increments of 50 KB.

Core Rules That Impact Message Consumption

1️⃣ Trigger Rule

Every inbound trigger is billed

Payload size is rounded up in 50 KB blocks

Examples:

30 KB → 1 message

70 KB → 2 messages

120 KB → 3 messages

📌 Trigger payload size is the most critical cost driver

2️⃣ Invoke Rule

Invoke request → ❌ Not billed

Invoke response:

≤ 50 KB → ❌ Not billed

> 50 KB → ✅ Billed in multiples of 50 KB

📌 Large responses from REST, SOAP, BIP, or SaaS APIs increase message usage.

3️⃣ File Server Rule

Applies to scheduled/file-based integrations

Each file > 50 KB is counted as messages

Billing is done in ceil(file_size / 50 KB)

Example:

170 KB file → ceil(170/50) = 4 messages

4️⃣ Internal Calls Rule

Calls within the same OIC instance are not billed

Integration → Integration

Process → Integration

Visual Builder → Integration

⚠️ Calling another OIC instance incurs billing on:

Target instance (always)

Source instance (if response > 50 KB)
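The four rules above reduce to simple ceiling arithmetic. Here is an illustrative Python sketch of that billing math as described in this post (the function names are my own, not an Oracle API):

```python
import math

BLOCK_KB = 50  # messages are metered in 50 KB increments

def trigger_messages(payload_kb: float) -> int:
    # Rule 1: every inbound trigger is billed, rounded up in 50 KB blocks.
    return max(1, math.ceil(payload_kb / BLOCK_KB))

def invoke_messages(response_kb: float) -> int:
    # Rule 2: requests are free; responses <= 50 KB are free,
    # larger responses are billed in multiples of 50 KB.
    return 0 if response_kb <= BLOCK_KB else math.ceil(response_kb / BLOCK_KB)

def file_messages(file_kb: float) -> int:
    # Rule 3: only files larger than 50 KB are counted, ceil(size / 50).
    return 0 if file_kb <= BLOCK_KB else math.ceil(file_kb / BLOCK_KB)

print(trigger_messages(120))  # 3
print(invoke_messages(20))    # 0
print(file_messages(170))     # 4
```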

Common Real-World Use Cases & Consumption Analysis

Use Case 1: REST Inbound → SaaS Update

Scenario

REST inbound payload: 40 KB

SaaS invoke response: 20 KB

Consumption

Trigger: ceil(40/50) = 1

Response: Not counted

✅ Total = 1 message

Use Case 2: Large REST Inbound Payload

Scenario

REST inbound payload: 120 KB

Consumption

Trigger: ceil(120/50) = 3

✅ Total = 3 messages

Use Case 3: File-Based Scheduled Integration

Scenario

Scheduled job downloads 3 files:

20 KB

40 KB

170 KB

Consumption

Only the 170 KB file (> 50 KB) is counted

ceil(170/50) = 4 messages

✅ Total = 4 messages

Use Case 4: Database Adapter with Loop Invokes

Scenario

DB adapter pulls 2 rows (20 KB)

REST invoke per row returns 20 KB

Consumption

Trigger: ceil(20/50) = 1

Invoke responses < 50 KB → Not billed

✅ Total = 1 message

Use Case 5: SOAP Trigger + Multiple File Downloads

Scenario

SOAP inbound: 70 KB

Files downloaded:

20 KB

170 KB

40 KB

Consumption

Trigger: ceil(70/50) = 2

File > 50 KB: ceil(170/50) = 4

✅ Total = 6 messages
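All five use-case totals come down to the same ceil-over-50-KB arithmetic; a quick sanity check in Python:

```python
import math

ceil_50 = lambda kb: math.ceil(kb / 50)

# Use case 1: 40 KB trigger; 20 KB invoke response is free
uc1 = ceil_50(40)                                       # 1
# Use case 2: 120 KB trigger
uc2 = ceil_50(120)                                      # 3
# Use case 3: files of 20/40/170 KB; only files > 50 KB are counted
uc3 = sum(ceil_50(f) for f in (20, 40, 170) if f > 50)  # 4
# Use case 5: 70 KB SOAP trigger plus the same three files
uc5 = ceil_50(70) + uc3                                 # 6
print(uc1, uc2, uc3, uc5)
```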

Solution Steps to Optimize Message Consumption in OIC

Step 1: Minimize Trigger Payload Size

Avoid sending unnecessary fields

Use query parameters instead of body where possible

Use lightweight JSON over SOAP when feasible

Step 2: Control Invoke Response Size

Use REST APIs with selective fields

Avoid returning full object payloads

Use pagination for large datasets

Step 3: Handle Large Files Smartly

Split large files before processing

Filter files before download

Process only delta files instead of full loads

Step 4: Prefer Internal OIC Calls

Reuse integrations within the same OIC instance

Avoid cross-instance integration unless mandatory

Step 5: Use Scheduled Integrations Carefully

Schedule only when needed

Avoid unnecessary polling

Combine multiple lightweight operations into one flow

Step 6: Monitor Usage Regularly

Use Usage Metrics → Billable Messages

Track hourly (OIC) or monthly (OIC for SaaS) usage

Identify spikes caused by large payloads or files

Key Takeaways

  • Trigger payload size is always billed.
  • Invoke responses matter, not requests.
  • Files > 50 KB increase cost quickly.
  • Good integration design = lower licensing cost.

Understanding these rules helps architects and developers design cost-efficient, scalable OIC integrations without compromising functionality.

Reference:

https://docs.oracle.com/en/cloud/paas/integration-cloud/oracle-integration-oci/monitoring-billable-messages.html

https://www.oraclespot.net/2023/05/oracle-integration-cloud-oic.html?m=0

Microsoft Excel - Working with Excel's Conditional Functions


1. Microsoft Excel – Working with Excel’s Conditional Functions

Conditional functions perform calculations or return values based on conditions.

Common conditional functions:

  • IF()
  • COUNTIF(), COUNTIFS()
  • SUMIF(), SUMIFS()
  • IFERROR()
  • AND(), OR(), NOT()

These are widely used in reporting, validation, dashboards, and analysis.

2. Working with Excel Name Ranges

A Named Range assigns a meaningful name to a cell or range.

Example:

Instead of:

=SUM(A1:A10)

Use:

=SUM(Sales)

How to create a Named Range:

  • Select the cells
  • Go to Formulas → Define Name
  • Enter the name (no spaces)
  • Click OK

3. Advantages and Disadvantages of Excel Name Ranges

✅ Advantages

Improves readability of formulas

Easier maintenance

Reduces formula errors

Useful in complex models

Works well with conditional functions

❌ Disadvantages

Difficult to manage in very large workbooks

Hidden names can cause confusion

Not always intuitive for beginners

Broken references if ranges are deleted incorrectly

4. Editing an Excel Name Range

To edit a Named Range:

Go to Formulas → Name Manager

Select the name

Click Edit

Change range or name

Click OK

5. Using Excel’s IF() Function

The IF function returns one value if a condition is TRUE, another if FALSE.

Syntax:

=IF(logical_test, value_if_true, value_if_false)

Example:

=IF(A1>=50,"Pass","Fail")

6. Excel’s IF() Function with a Name Range

Assume:

Named Range: Score → A1

=IF(Score>=60,"Qualified","Not Qualified")

This makes formulas easier to understand and reuse.

7. Nesting Functions with Excel

Nesting means using one function inside another.

Example:

=IF(A1>=50, IF(A1>=75,"Distinction","Pass"),"Fail")

8. Nesting Excel’s AND() Function within the IF() Function

Use AND() when multiple conditions must be TRUE.

Syntax:

=IF(AND(condition1, condition2), value_if_true, value_if_false)

Example:

=IF(AND(A1>=50, B1="Yes"), "Eligible", "Not Eligible")

9. Using Excel’s COUNTIF() Function

Counts cells that meet one condition.

Syntax:

=COUNTIF(range, criteria)

Example:

=COUNTIF(A1:A20, ">=60")

Counts students scoring 60 or above.

10. Using Excel’s SUMIF() Function

Adds values that meet a specific condition.

Syntax:

=SUMIF(range, criteria, [sum_range])

Example:

=SUMIF(A1:A10,"Electronics",B1:B10)

Sums sales where category is Electronics.
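For readers comparing with other tools, COUNTIF and SUMIF have direct one-line equivalents in Python (the sample data below is illustrative):

```python
# Like =COUNTIF(A1:A20, ">=60"): count scores of 60 or above
scores = [45, 60, 72, 88, 59]
countif = sum(1 for s in scores if s >= 60)

# Like =SUMIF(A1:A10, "Electronics", B1:B10): sum sales in one category
categories = ["Electronics", "Toys", "Electronics"]
sales      = [100, 50, 250]
sumif = sum(v for c, v in zip(categories, sales) if c == "Electronics")

print(countif, sumif)  # 3 350
```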

11. Using Excel’s IFERROR() Function

Handles errors gracefully.

Syntax:

=IFERROR(value, value_if_error)

Example:

=IFERROR(A1/B1, 0)

Prevents #DIV/0! errors.

🔑 Practical Business Use Case

=IFERROR(IF(AND(Sales>=Target, Region="East"),"Bonus","No Bonus"),"No Data")

This combines:

Named Ranges

IF()

AND()

IFERROR()
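The same decision logic can be mirrored outside Excel. Below is a hedged Python sketch in which try/except plays the role of IFERROR (function name and sample values are illustrative, not part of the workbook):

```python
def bonus_status(sales, target, region):
    # Mirrors =IFERROR(IF(AND(Sales>=Target, Region="East"),
    #                     "Bonus","No Bonus"), "No Data")
    try:
        return "Bonus" if (sales >= target and region == "East") else "No Bonus"
    except TypeError:  # bad/missing input, like IFERROR catching an error
        return "No Data"

print(bonus_status(120, 100, "East"))   # Bonus
print(bonus_status(80, 100, "East"))    # No Bonus
print(bonus_status(None, 100, "East"))  # No Data
```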

Monday, December 29, 2025

Microsoft Excel - Working with Excel Power Pivot Tools

1. Introduction to Excel Power Pivot

Power Pivot is an advanced data modeling and analytics feature in Microsoft Excel that allows you to:

  • Work with large datasets (millions of rows)
  • Combine data from multiple sources
  • Create relationships between tables
  • Use DAX (Data Analysis Expressions) for powerful calculations

Power Pivot turns Excel into a lightweight BI (Business Intelligence) tool.

2. Why Power Pivot?

Power Pivot is used when normal Excel formulas and PivotTables are not enough.

Key Benefits:

  • 🚀 Handles huge data volumes efficiently
  • 🔗 Supports relational data modeling (like databases)
  • 📊 Creates advanced calculations using DAX
  • 🔄 No need for VLOOKUP/XLOOKUP across tables
  • 📈 Enables KPIs, calculated columns, and measures

When to use Power Pivot:

  • Multiple tables (Sales, Customers, Products, Dates)
  • Complex aggregations (YTD, MTD, growth %)
  • Performance issues with traditional Excel

3. Activating the Excel Power Pivot Add-In

Power Pivot is disabled by default.

Steps to enable:

  • Open Excel
  • Go to File → Options
  • Click Add-ins
  • At the bottom, select COM Add-ins → Click Go
  • Check Microsoft Power Pivot for Excel
  • Click OK

👉 A new Power Pivot tab appears in the Excel ribbon.



4. Creating Data Models with Power Pivot

A Data Model is a collection of related tables used for reporting.

Ways to add data:

Import from Excel tables

Import from SQL Server / CSV / Access

Use Power Query → Load to Data Model

Steps:

Open Power Pivot → Manage

Import tables

Clean and shape data (optional via Power Query)

Create Calculated Columns and Measures

📌 Data is stored in compressed, in-memory format for fast performance.


5. Excel Power Pivot Data Model Relationships

Relationships connect tables using common key columns.

Example:

Sales[CustomerID]

Customers[CustomerID]

Relationship Types:

One-to-Many (1:*) – most common

Active vs Inactive relationships

How to create:

Power Pivot → Diagram View

Drag key column from one table to another

Ensure:

Same data type

Unique values on “One” side

✔ Eliminates lookup formulas

✔ Enables cross-table analysis




6. Creating PivotTables based on Data Models

PivotTables built on Data Models are more powerful than standard ones.

Steps:

Insert → PivotTable

Choose Use this workbook’s Data Model

Build PivotTable using fields from multiple tables

Advantages:

Use fields from different related tables

Use Measures (DAX) instead of simple sums

Better performance

📊 Example:

Sales Amount by Region and Product Category

Customer-wise revenue trends




7. Excel Power Pivot KPIs

KPIs (Key Performance Indicators) measure performance against targets.

KPI Components:

Base Measure – actual value (e.g., Total Sales)

Target Measure – goal (e.g., Sales Target)

Status – visual indicator (green/yellow/red)

Steps to create KPI:

Create a Measure

Total Sales := SUM(Sales[Amount])

Right-click the measure → Create KPI

Define target and thresholds

Choose icons (traffic lights, arrows)

📌 KPIs are visible in PivotTables and dashboards.










Thursday, December 18, 2025

OIC - Event-Driven Notification Framework in Oracle Integration Cloud (OIC) Using Apache Kafka


🧩 Use Case

Enterprises need a reliable, scalable, and asynchronous notification platform that can handle:

  • High-volume events
  • Multiple notification channels (SMS, App, Email)
  • Controlled retries
  • Failure isolation
  • End-to-end observability

The challenge is to integrate Apache Kafka with Oracle Integration Cloud (OIC) while ensuring:

  • Guaranteed message delivery
  • Retry and dead-letter handling
  • Clean separation between orchestration and channel delivery

🎯 Solution Overview

This solution leverages:

  • Apache Kafka as the event backbone
  • Oracle Integration Cloud (OIC) for orchestration
  • Retry Topic + Dead Letter Topic (DLT) pattern
  • Channel-specific child integrations
  • External notification providers

🏗️ High-Level Architecture

Core Components

  • Kafka Topics
    • notification-main-topic
    • notification-retry-topic
    • notification-dead-topic
  • OIC Integrations
    • Publisher Integration
    • Main Consumer Integration
    • Retry Consumer Integration
    • Child Channel Integrations (SMS, App, Email)

🔄 End-to-End Flow

1️⃣ Event Publishing to Kafka

  • Source system (CCS / upstream app) sends notification request to OIC.
  • OIC Publisher Integration publishes the event to:
    • Kafka Main Topic

OIC connects to Kafka via REST Proxy or OCI Service Connector / API Gateway (depending on deployment).


2️⃣ Kafka Main Topic Consumption

  • OIC Main Consumer Integration consumes messages from:
    • notification-main-topic
  • Performs:
    • Payload validation
    • Header enrichment (correlationId, retryCount)

3️⃣ Notification Orchestration

The OIC Main Service Orchestrator:

  • Identifies notification type
  • Routes to the correct child integration:
    • SMS
    • App Push
    • Email

4️⃣ Channel-Specific Child Integrations

Child Integration    External System
OIC-Child-SMS        Twilio
OIC-Child-App        Azure
OIC-Child-Email      SendGrid

Each child:

  • Invokes external provider
  • Returns success or failure response

5️⃣ Retry Handling with Kafka Topics

If processing fails:

  • Message is published to Kafka Retry Topic
  • retryCount is incremented
  • Optional delay logic applied (consumer-side or scheduler-based)

Retry Consumer Integration:

  • Consumes from notification-retry-topic
  • Reprocesses message
  • Republishes to Main Topic if retry count < max

6️⃣ Dead Letter Topic (DLT) Handling

When retry count exceeds threshold:

  • Message is published to:
    • notification-dead-topic
  • Dead messages are:
    • Logged to Datadog
    • Stored for audit and manual reprocessing
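The retry/DLT routing described above boils down to a small decision function. This is an illustrative Python sketch, not OIC or Kafka client code; MAX_RETRIES stands in for the externalized threshold and the topic names come from this post:

```python
MAX_RETRIES = 3  # assumed externalized threshold

def route_failed_message(retry_count: int) -> str:
    """Pick the next topic for a failed notification:
    below the threshold it goes back through the retry topic,
    at or beyond it, to the dead-letter topic."""
    if retry_count < MAX_RETRIES:
        return "notification-retry-topic"
    return "notification-dead-topic"

def next_headers(headers: dict) -> dict:
    # Header enrichment on failure: bump retryCount, keep correlationId.
    h = dict(headers)
    h["retryCount"] = h.get("retryCount", 0) + 1
    return h

msg = {"correlationId": "abc-123", "retryCount": 2}
print(route_failed_message(msg["retryCount"]))  # notification-retry-topic
print(next_headers(msg)["retryCount"])          # 3
```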

🔁 Retry Logic Summary

Stage Action
Main Topic Failure Send to Retry Topic
Retry Count < Max Reprocess
Retry Count = Max Send to Dead Topic
Dead Topic Monitoring + Alerting

📊 Design Diagram (Kafka + OIC)

Source System (CCS)
        |
        v
   OIC Publisher
        |
        v
+-----------------------+
| Kafka Main Topic      |
| notification-main     |
+-----------------------+
        |
        v
 OIC Main Consumer
        |
        v
 OIC Main Orchestrator
   |        |        |
   |        |        |
  SMS      APP     EMAIL
   |        |        |
 Twilio   Azure  SendGrid
   |
Failure?
   |
   v
+-----------------------+
| Kafka Retry Topic     |
| notification-retry    |
+-----------------------+
        |
        v
 OIC Retry Consumer
        |
Retry < Max?
   |        |
  Yes      No
   |        |
   v        v
Main     Kafka Dead Topic
Topic    notification-dead
             |
             v
          Datadog

🛠️ Kafka Design Best Practices

✅ Separate topics for main, retry, and dead messages
✅ Include retryCount and correlationId in headers
✅ Use consumer groups for scalability
✅ Idempotent processing in OIC
✅ Externalized retry thresholds
✅ Centralized logging and monitoring


📌 OIC Integration Design Notes

  • Use App-Driven Orchestration for consumers
  • Use Stage File or ATP for temporary persistence (optional)
  • Secure Kafka via:
    • SASL / SSL
    • API Gateway
  • Use Tracking ID for end-to-end visibility

🚀 Benefits of Kafka-Based Design

✔ High throughput & durability
✔ Decoupled producers and consumers
✔ Controlled retries without message loss
✔ Cloud-agnostic architecture
✔ Easy scaling across consumers


🔚 Conclusion

By integrating Apache Kafka with Oracle Integration Cloud, organizations can build a resilient, scalable, and fault-tolerant notification framework that supports enterprise-grade messaging with full observability and retry control.


Tuesday, December 9, 2025

Microsoft Excel - Working with Large Sets of Excel Data

📘 Microsoft Excel – Working with Large Sets of Data

Managing large datasets in Excel becomes easier when you use the right tools. Below are the most important features to improve navigation, analysis, and printing.


1️⃣ Freeze Panes – Keep Headers Visible

Freeze Panes helps you lock specific rows or columns so they remain visible while scrolling through large files.

How to Use

  • Freeze Top Row
    View → Freeze Panes → Freeze Top Row
  • Freeze First Column
    View → Freeze Panes → Freeze First Column
  • Freeze Specific Rows & Columns
    1. Select the cell below the row and right of the column you want to freeze
    2. Go to View → Freeze Panes

Use Case

When scrolling through thousands of records, the headers stay visible, making data easier to understand.




2️⃣ Grouping Data (Rows or Columns)

Grouping helps collapse and expand sections of data—ideal for large sheets with repeated categories.

How to Use

  1. Select the rows or columns you want to group
  2. Go to Data → Group
  3. Use the + / – signs to expand or collapse

Use Case

  • Group rows of monthly data under quarters
  • Hide detail rows and show only the summary
  • Manage long reports neatly


3️⃣ Print Options for Large Data Sets

Printing large worksheets can be messy unless properly formatted.

Key Print Tools

  • Print Titles: Repeat header rows on every page
    Page Layout → Print Titles → Rows to Repeat at Top
  • Fit Sheet on One Page
    File → Print → Scaling → Fit Sheet on One Page
  • Set Print Area
    Page Layout → Print Area → Set Print Area
  • Page Break Preview
    View → Page Break Preview to adjust how pages split

Use Case

Ensures large reports print cleanly with titles and without cutting columns.



4️⃣ Linking Worksheets (3D Formulas)

3D formulas allow calculations across multiple worksheets at once.

How to Use

Example: Sum cell B2 from Sheet1 to Sheet5:

=SUM(Sheet1:Sheet5!B2)

Use Case

Perfect for:

  • Monthly data sheets
  • Summarizing identical layouts (Jan–Dec)
  • Creating dashboards pulling from multiple sheets
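The 3D-formula idea (one calculation spanning identically laid-out sheets) can be mirrored with a simple data structure; a Python sketch with made-up values:

```python
# Five sheets with the same layout; cell B2 holds each sheet's total.
sheets = {
    "Sheet1": {"B2": 100},
    "Sheet2": {"B2": 150},
    "Sheet3": {"B2": 200},
    "Sheet4": {"B2": 250},
    "Sheet5": {"B2": 300},
}

# Equivalent of =SUM(Sheet1:Sheet5!B2)
total = sum(sheet["B2"] for sheet in sheets.values())
print(total)  # 1000
```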


5️⃣ Consolidating Data from Multiple Worksheets

Combine data from multiple sheets into one summary sheet.

How to Use

  1. Go to Data → Consolidate
  2. Choose function (Sum, Count, Average, etc.)
  3. Add references from multiple sheets
  4. Select “Top row” and “Left column” labels if needed
  5. Click OK

Use Case

  • Combine monthly sales into yearly summary
  • Merge repeated formats from different teams or regions
  • Build master reports automatically




🎯 Summary Table

Feature         Purpose
Freeze Panes    Keep headers visible while scrolling
Grouping        Collapse/expand large sections of data
Print Options   Print large sheets cleanly and professionally
3D Formulas     Use formulas across multiple sheets
Consolidate     Merge data from several sheets into one


Monday, December 8, 2025

OIC – Unified File Reprocessing Framework Using Scheduler Flag, Dual SFTP Sources, Child Integration, and Centralized Error Handling

📌 Use Case

Business processes often require file reprocessing and fresh processing from two different locations:

  1. OIC SFTP → for reprocessing previously failed files
  2. Vendor SFTP → for processing newly delivered source files

To manage both scenarios within a single automated pipeline, we use a Scheduler flag, a central Main Integration, and a dedicated Child Integration. A unified Global Fault Handler sends errors to CCS, logs them into Datadog, and moves failed files to an error directory.

This ensures stable, recoverable, and auditable file processing within OIC.


🛠️ Solution Steps


1️⃣ Scheduler Integration – Control the Processing Mode

The scheduler integration triggers the main service and passes a flag:

filePresentInOICSFTP = Y

  • System should process files from OIC SFTP
  • This is used for reprocessing failed files

filePresentInOICSFTP = N

  • System should process files from Vendor SFTP
  • This handles fresh daily load

This flag enables dynamic routing and avoids maintaining separate integrations for reprocessing.


2️⃣ Main Integration – File Discovery & Preparation

The main service behaves differently based on the flag.


✔️ A. If filePresentInOICSFTP = Y (Reprocessing Mode)

  • List files from OIC SFTP
  • For each file found:
    • Call the Child Integration
    • Pass file name + path + flag

No need to download or create new files—direct reprocessing.


✔️ B. If filePresentInOICSFTP = N (Fresh Run Mode)

  • List files from Vendor SFTP
  • For each file:
    • Download file from vendor SFTP
    • Write the file to OIC SFTP for internal processing
    • Call the Child Integration, passing the newly stored file name

This ensures all files—new or old—are processed uniformly through OIC SFTP.
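The flag-based routing in steps A and B comes down to one decision. An illustrative Python sketch (the directory names and return shape are my own assumptions, not OIC artifacts):

```python
def pick_source(file_present_in_oic_sftp: str) -> dict:
    """Route the run based on the scheduler flag:
    'Y' -> reprocess files already sitting on OIC SFTP;
    'N' -> fresh run: list vendor SFTP and stage files on OIC SFTP first."""
    if file_present_in_oic_sftp == "Y":
        return {"source": "oic_sftp", "stage_copy_needed": False}
    return {"source": "vendor_sftp", "stage_copy_needed": True}

print(pick_source("Y"))  # {'source': 'oic_sftp', 'stage_copy_needed': False}
print(pick_source("N"))  # {'source': 'vendor_sftp', 'stage_copy_needed': True}
```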


3️⃣ Child Integration – Validation & File Handling

This integration performs the actual business logic.


✔️ Step A: Retrieve File Reference

Since files are stored in OIC SFTP in both modes:

  • Use File Server Action → Read File
  • Generate file reference for reading & parsing

✔️ Step B: Read File Content

  • Read the contents of the file using file reference
  • Parse the data based on flat file layout

✔️ Step C: Business Validation

Apply the core rule:

The required field must contain “00”.

If the condition fails:

  • Use Throw New Fault
    • This forces execution of the Global Fault Handler
    • Ensures controlled, trackable exceptions

✔️ Step D: Delete Source File

After reading:

  • Delete file from OIC SFTP

If file was originally from vendor SFTP:

  • The vendor SFTP file is already cleaned earlier by the main service
  • (or can be cleaned here based on your design)

This avoids duplicate processing.


4️⃣ Global Fault Handler – Centralized Error Management

When the child flow throws an exception:


✔️ A. Create Case in CCS (TO Module)

  • Send error details and file metadata
  • Trigger TO creation for functional follow-up

✔️ B. Log Errors to Datadog

Log structured fields:

  • File name
  • Validation failure message
  • Timestamp
  • Integration instance ID
  • Severity

This ensures observability and alerting.


✔️ C. Move the File to Error Directory

  • Take the original payload/file
  • Move it to OIC SFTP Error Folder
  • Prevents accidental deletion of faulty files
  • Allows manual reprocessing if needed

✔️ Final Outcome

This end-to-end design enables:

  • Single unified pipeline for fresh processing + reprocessing
  • Smart routing based on Scheduler flag
  • Guaranteed file availability in OIC SFTP for consistent logic
  • Strong validation using controlled Throw New Fault
  • Automatic integration with CCS for error cases
  • Observability via Datadog
  • Robust error-handling and file preservation
  • Minimal manual intervention

A complete, enterprise-grade framework for file ingestion & reprocessing in OIC.


Sunday, December 7, 2025

OIC – Ensuring Accurate Status Reporting in Real-Time Sync Integrations Using track_var_3

📌 Use Case

In real-time (synchronous) integrations, we noticed an issue where the OIC dashboard shows the instance status as Succeeded even though the internal flow actually failed.
This mismatch creates confusion for support teams and impacts monitoring because synchronous integrations do not properly propagate faults in certain failure scenarios.

To fix this, we implemented a custom tracking mechanism using track_var_3, where we manually mark the integration as SUCCESS or ERROR depending on the actual execution of the full flow.


🛠️ Solution Steps

1️⃣ Identify the Problem Scenario

  • The integration appears as Succeeded in the monitoring dashboard.
  • But inside the flow, some actions failed (for example: failed DB operation, REST error, or transformation issue).
  • No global fault was triggered, so OIC still treated it as a success.

2️⃣ Introduce a Tracking Variable (track_var_3)

Use the built-in tracking variables of OIC.

  • Reserve track_var_3 exclusively for storing the final status of the integration.
  • It will be updated as:
    • "SUCCESS" → when the full flow is executed without issues
    • "ERROR" → when any failure occurs

This variable is visible in monitoring and helps support quickly identify the real outcome.


3️⃣ Assign “SUCCESS” at the End of the Main Flow

At the last step of your primary flow:

  • Add an Assign Action
  • Set:
    track_var_3 = "SUCCESS"
    

This ensures the flow marks itself successful only after all business logic passes.


4️⃣ Handle Failures via Global Fault

In the Global Fault Handler, add:

  • An Assign Action
    track_var_3 = "ERROR"
    

This ensures:

  • Any unhandled fault
  • Any adapter failure
  • Any transformation error
    → Automatically marks the instance as ERROR.


5️⃣ Use the Tracking Variable for Support Monitoring

The support team will now monitor:

  • Instance status (may still show “Succeeded” due to OIC behaviour)
  • track_var_3 value (always accurate)

This provides the actual integration result.


6️⃣ Optional: Return Status to Calling App

In synchronous integrations:

  • Respond back to the caller with
    { "status": track_var_3 }
    

so they also get the correct success/error.
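The whole pattern fits in a few lines. An illustrative Python sketch in which an ordinary variable stands in for the OIC tracking variable track_var_3:

```python
def run_sync_flow(payload):
    """Sketch of the tracking pattern: the status is set to SUCCESS only
    after the whole flow completes; any fault flips it to ERROR in the
    handler, mirroring the Global Fault Handler assign."""
    track_var_3 = "ERROR"          # pessimistic default
    try:
        # ... business logic: validation, DB ops, REST invokes ...
        if not payload:
            raise ValueError("empty payload")
        track_var_3 = "SUCCESS"    # last step of the main flow
    except Exception:
        track_var_3 = "ERROR"      # global fault handler assign
    return {"status": track_var_3}  # returned to the caller as well

print(run_sync_flow({"id": 1}))  # {'status': 'SUCCESS'}
print(run_sync_flow(None))       # {'status': 'ERROR'}
```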


✔️ Final Outcome

With this approach:

  • Even if OIC marks the integration as “Succeeded”,
  • Your tracking variable clearly indicates the correct status
  • Support teams get accurate visibility
  • No missed failures in real-time sync services
