Thursday, July 10, 2025

OIC - Lessons Learned & Improvements in OIC Integrations

📘 Use Case

During various OIC projects, we identified recurring issues that impacted logging, error tracking, retry handling, and monitoring through tools like DataDog. Here’s a list of key observations and the solutions we applied.


1. Suppressed Error Details

  • Observation:
    OIC sends only a generic error message to DataDog or external logs, hiding the actual root cause.

  • Solution:
    Capture the actual faultMessage in error handlers and send it to DataDog along with other details for better troubleshooting.


2. No Retry for Temporary Errors

  • Observation:
    Transient connectivity or network issues fail immediately without any retry attempt.

  • Solution:
    Add retry logic using fault handlers or scopes for specific error types (like connection timeouts or 5xx errors).
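In OIC this is done with scope-level fault handlers and a retry loop; the equivalent logic, sketched in Python for illustration (the function name and error types are placeholders), looks like:

```python
import time

# Stand-ins for transient faults (connection timeouts, 5xx responses mapped to errors)
TRANSIENT_ERRORS = (TimeoutError, ConnectionError)

def call_with_retry(invoke, max_attempts=3, base_delay=1.0):
    # Retry only transient faults, with exponential backoff between attempts
    for attempt in range(1, max_attempts + 1):
        try:
            return invoke()
        except TRANSIENT_ERRORS:
            if attempt == max_attempts:
                raise  # retries exhausted: let the outer fault handler log it
            time.sleep(base_delay * 2 ** (attempt - 1))
```

Non-transient faults (e.g., validation errors) should not be retried, so they are deliberately not caught here.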


3. Missing Correlation ID for Fusion Failures

  • Observation:
    When a Fusion ESS job fails, the logs don’t include any identifier like the ESS Job ID or request ID, making it hard to trace.

  • Solution:
    Extract and log the ESS request ID or other correlation IDs from Fusion and include them in your custom logs.


4. Payload Not Validated

  • Observation:
    OIC flows sometimes try to process empty or null payloads, which leads to schema errors or misleading messages.

  • Solution:
    Add condition checks early in the flow to verify if payloads contain data before proceeding to mappings or invokes.
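A minimal sketch of such a guard in Python (the `items` wrapper element is a hypothetical payload shape, not an OIC convention):

```python
def has_data(payload):
    # Reject None, empty lists, and wrappers with no records
    # before proceeding to mappings or invokes
    if payload is None:
        return False
    items = payload.get("items") if isinstance(payload, dict) else payload
    return bool(items)
```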


5. Only Errors Logged, Not Success

  • Observation:
    DataDog or similar tools receive only error logs, and successful integrations are not tracked, affecting KPI reporting.

  • Solution:
    Log success cases as well, including important business identifiers like invoice number, PO number, or employee ID for better tracking.
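As an illustration, a structured success record might be assembled like this (field names such as `invoiceNumber` are examples, not a DataDog requirement):

```python
import json
import time

def build_log_event(integration, status, identifiers, message=""):
    # Structured record: log SUCCESS as well as ERROR so KPIs stay accurate
    return {
        "integration": integration,
        "status": status,
        "timestamp": int(time.time()),
        "message": message,
        **identifiers,  # business keys: invoice number, PO number, employee ID...
    }

event = build_log_event("INT_AP_INVOICES", "SUCCESS",
                        {"invoiceNumber": "INV-1001", "poNumber": "PO-77"})
print(json.dumps(event))
```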

6. Integration Timeout Not Handled

  • Observation:
    Some long-running integrations fail due to timeout, especially when calling external systems that take time to respond.

  • Solution:
    Adjust the timeout settings in the connection properties. Also, wrap such calls in a scope with timeout handling logic to provide custom error messages or fallback.

7. Overuse of Hardcoded Values

  • Observation:
    Many integrations had hardcoded values for endpoints, credentials, or lookup keys, making them hard to migrate or scale.

  • Solution:
    Use Lookups, Global Variables, and Connections smartly to externalize values. Parameterize as much as possible.

8. No Archival or Logging of Request Payloads

  • Observation:
    When issues occurred, there was no record of what payload was received—making RCA difficult.

  • Solution:
    Log incoming request payloads (masked if sensitive) to a file server, UCM, or external logging systems before processing.
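A sketch of masking before archival (Python for illustration; the field names in `SENSITIVE` are placeholders for whatever your payloads treat as sensitive):

```python
import copy

SENSITIVE = {"ssn", "bankAccount", "password"}  # placeholder field names

def mask_payload(payload):
    # Deep-copy, then mask sensitive string fields anywhere in the structure,
    # so the original payload continues through the flow untouched
    masked = copy.deepcopy(payload)

    def walk(node):
        if isinstance(node, dict):
            for key, value in node.items():
                if key in SENSITIVE and isinstance(value, str):
                    node[key] = "***"
                else:
                    walk(value)
        elif isinstance(node, list):
            for item in node:
                walk(item)

    walk(masked)
    return masked
```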

9. Overloaded Error Handlers Catching Everything

  • Observation:
    A single generic error handler catches all faults, masking the actual error and causing confusion.

  • Solution:
    Use specific fault handlers (like for Timeout, AuthenticationFailure, ServiceError) instead of one "catch-all" block. Customize messages accordingly.

10. Lack of Version Control or Documentation

  • Observation:
    Integration flows were updated without tracking changes or maintaining documentation, making it difficult for others to manage.

  • Solution:
    • Maintain version notes or release logs.
    • Use naming conventions for integration versions.
    • Document integration logic, mappings, and lookups in a central repo or Confluence page.

11. Poor Use of Data Stitching (Unnecessary Variables)

  • Observation:
    Multiple unnecessary variables and assignments are used where direct mapping or transformation would work.

  • Solution:
    Optimize mappings and data handling. Use fewer intermediate variables and go for direct expressions or XSLT if needed.

12. Integration Not Idempotent

  • Observation:
    Some integrations post the same data multiple times if retried, causing duplicates in target systems.

  • Solution:
    Implement idempotency checks—use message IDs, reference numbers, or flags in the target system to avoid re-processing.
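The idempotency check can be sketched as follows (Python for illustration; in OIC the `processed` store would be a DB table or a flag in the target system, not an in-memory set):

```python
processed = set()  # in practice: a DB table, or a reference-number flag in the target

def process_once(message_id, handler):
    # Skip messages whose ID was already processed to avoid duplicates on retry
    if message_id in processed:
        return "DUPLICATE_SKIPPED"
    result = handler()
    processed.add(message_id)  # mark only after the target call succeeds
    return result
```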

🎯 Outcome

Implementing these improvements helped us:

  • Get full visibility into success and failure cases
  • Reduce debugging time
  • Improve monitoring accuracy in tools like DataDog
  • Increase reliability of integrations with retry logic


Sunday, July 6, 2025

OIC - How to upload file to sharepoint using Microsoft graph API

📌 Use Case

In this use case, we are building an integration in Oracle Integration Cloud (OIC) that:

  1. Downloads a file from a File Server.
  2. Uploads that file to a specific SharePoint folder using Microsoft Graph API.

This is especially helpful in scenarios where enterprises manage data exports on file servers and want to automate data archival or sharing via SharePoint.


⚙️ Solution Design Overview

The integration follows these main steps:

  1. Trigger – A scheduled or REST-based trigger initiates the process.
  2. Fetch Site ID – Retrieves SharePoint Site ID using the server-relative path.
  3. Fetch Drive ID – Retrieves the Drive ID associated with the Site.
  4. Download File – Reads the file from the file server.
  5. Upload File – Uploads the file to SharePoint using Microsoft Graph API PUT call.

๐Ÿ” Step-by-Step Solution


✅ Step 1: Get SharePoint Site ID

  • REST Endpoint Name: GetSiteID
  • Method: GET
  • Relative URI:
    /sites/{tenant}.sharepoint.com%3A/sites/{server-relative-path}
    
  • Response Sample:
    {
      "@odata.context": "https://graph.microsoft.com/v1.0/$metadata#sites/$entity",
      "createdDateTime": "2022-09-26T07:22:04.923Z",
      "description": "sp_org_app_DWCSSystemIntegration_qa",
      "id": "yourtenant.sharepoint.com,c87b311d-f1f0-4576-9f43-256b0366ccd4,315734c4-892c-486f-8901-5c8827144a16",
      "lastModifiedDateTime": "2024-08-23T11:25:16Z",
      "name": "sp_org_app_DWCSSystemIntegration_qa"
    }

✅ Step 2: Get SharePoint Drive ID

  • REST Endpoint Name: GetDriveID
  • Method: GET
  • Relative URI:
    /sites/{siteId}/drives
    
  • Query Parameter:
    $filter = name eq '<folder_name>'
    
  • Site ID is dynamically extracted from the previous response using an XSL mapping.

🧠 Conditionally Construct the Filter Query:

  • Use substring-before() if the parent path contains /
  • Otherwise, use the value as is
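The same conditional, expressed in Python for illustration (in OIC this is done with substring-before() in the XSL mapping):

```python
def drive_filter(parent_path):
    # Drive (document library) name = text before the first '/',
    # or the whole value when there is no '/'
    name = parent_path.split("/", 1)[0] if "/" in parent_path else parent_path
    return "name eq '%s'" % name
```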

Sample response:

{
  "@odata.context": "https://graph.microsoft.com/v1.0/$metadata#drives",
  "value": [
    {
      "id": "b!0mFabc12345def6789ghiJKLmnopQRSTuvwxYZaBCDE",
      "driveType": "documentLibrary",
      "name": "Documents",
      "webUrl": "https://yourtenant.sharepoint.com/sites/testsite/Shared%20Documents",
      "createdDateTime": "2023-04-20T10:30:00Z",
      "lastModifiedDateTime": "2024-03-15T08:45:00Z",
      "createdBy": {
        "user": {
          "displayName": "Admin User",
          "id": "admin-user-id"
        }
      },
      "lastModifiedBy": {
        "user": {
          "displayName": "Admin User",
          "id": "admin-user-id"
        }
      }
    },
    {
      "id": "b!9xYz321klmn456uvwXYZabcDEfghiJKLMNoPQrsTUv",
      "driveType": "documentLibrary",
      "name": "Shared Documents",
      "webUrl": "https://yourtenant.sharepoint.com/sites/testsite/Shared%20Documents"
    }
  ]
}



✅ Step 3: Download File from File Server

  • Action: Use File Adapter with Native File System (FS)
  • Read Mode: Binary
  • Output: Stream Reference

✅ Step 4: Upload File to SharePoint

  • REST Endpoint Name: UploadFileToSharepoint
  • Method: PUT
  • Relative URI:
    /drives/{driveid}/root:/{filename}:/content
    
  • Payload Format: Binary
  • Content-Type: Set as dynamic or static depending on file type
    Example: text/csv or application/octet-stream
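For illustration, the upload call can be sketched as a plain HTTP request (Python stdlib; the request is built but not sent here, and note this simple-upload endpoint is for files under 4 MB, larger files need an upload session):

```python
import urllib.request

GRAPH = "https://graph.microsoft.com/v1.0"

def build_upload_request(drive_id, file_name, content, token,
                         content_type="application/octet-stream"):
    # PUT /drives/{drive-id}/root:/{file-name}:/content uploads the raw bytes
    url = f"{GRAPH}/drives/{drive_id}/root:/{file_name}:/content"
    return urllib.request.Request(
        url,
        data=content,
        method="PUT",
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": content_type},
    )

req = build_upload_request("b!abc", "report.csv", b"a,b\n1,2\n", "<token>", "text/csv")
print(req.full_url)
```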

🧩 Key Integration Design Elements

Component: Description

  • Trigger: REST or Schedule trigger
  • File Server: Native File Adapter
  • SharePoint: Microsoft Graph API
  • Mapping: Used to extract siteId, build the filter, and construct headers
  • Headers: Content-Type (optional but recommended)
  • Payload: Binary stream from the File Adapter

🛠️ Pre-requisites

  • Microsoft Graph OAuth 2.0 Authentication configured in OIC
  • SharePoint API permissions:
    • Sites.Read.All
    • Files.ReadWrite.All
  • File server connection configured
  • OIC connectivity agent if on-prem file server

📌 Conclusion

With this approach, you can automate file transfers between a File Server and SharePoint seamlessly using OIC. This design is scalable and allows for dynamic path and filename handling, making it robust for real-world enterprise use cases.


Implementation screenshots (images omitted): Trigger, Get file from file server, Get site id, Get drive id, Upload file to sharepoint.

Thursday, July 3, 2025

OIC - How to Generate JWT CID Token with SHA256 Hash in Oracle Integration Cloud (OIC)

๐Ÿ” How to Generate JWT CID Token with SHA256 Hash in Oracle Integration Cloud (OIC)

🧩 Use Case

As part of secure API integration with HSBC (or any financial institution requiring strict identity/authentication enforcement), the client must send a JWT (JSON Web Token) as a CID (Client Identification Token) in the Authorization header of each API request. This token includes a signed hash (SHA-256) of the payload body to ensure message integrity.

This post walks you through how to:

  • Construct the JWT token using base64 encoded header and payload.
  • Generate the SHA256 hash of the payload body.
  • Sign the token using a private key.
  • Assemble and use the CID token in OIC integration.

⚙️ Components Used

  • OIC JavaScript Action to calculate SHA-256 hash.
  • Security Certificates: Private key to sign the JWT.
  • REST Adapter: To call target API with proper headers.
  • Mapper + Assign: To construct JWT parts and signature.

๐Ÿ—️ JWT Structure

A JWT consists of:

  1. Header – defines algorithm & token type.
  2. Payload – includes sub, aud, iat, jti, and most importantly, a payload_hash.
  3. Signature – created by signing Base64(Header) + "." + Base64(Payload) using private key.

Format:

JWT = BASE64URL(Header) + "." + BASE64URL(Payload) + "." + BASE64URL(Signature)

Sample Signature Input (from screenshots):

ASCII(BASE64URL(Header) + "." + BASE64URL(Payload))

๐Ÿ” OIC Implementation Steps

1️⃣ Step 1: Generate SHA-256 Hash of Payload

Create a JavaScript action SHA256Generator.js:

function checksum_sha256(inputStr) {
    var sha256_result = oic.checksum.sha256(inputStr, "sha-256");
    return sha256_result;
}

Pass the stringified JSON payload to this function before JWT creation.

Reference:

https://docs.oracle.com/en/cloud/paas/application-integration/integrations-user/import-library-file.html#GUID-D9638CD4-ADCE-4C8A-B5B3-1969086E642E


2️⃣ Step 2: Construct JWT Header

Example:

{
  "ver":"1.0",
  "typ": "JWT",
  "alg": "RS256",
  "kid": "CLP"
}

Base64URL encode this JSON string.


3️⃣ Step 3: Construct JWT Payload

Example payload:

{
  "sub": "CLP",
  "aud": "EPS",
  "payload_hash_alg": "SHA-256",
  "payload_hash": "<hash from JS function>",
  "iat": 1750411716,
  "jti": "91bee275c-a920-4ef9-ac39-1dbe3f50372d"
}

Use string.replace() in OIC to inject dynamic values like:

  • payload_hash – output of JS function
  • iat – current epoch time
  • jti – UUID (can be generated in integration)
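The hash and token-assembly steps above can be sketched in Python (stdlib only; the RS256 signature itself requires a crypto library and the private key, so this builds just the signing input):

```python
import base64
import hashlib
import json
import time
import uuid

def b64url(data: bytes) -> str:
    # Base64URL without padding, per the JWT convention
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

def build_signing_input(body_json: str) -> str:
    header = {"ver": "1.0", "typ": "JWT", "alg": "RS256", "kid": "CLP"}
    payload = {
        "sub": "CLP",
        "aud": "EPS",
        "payload_hash_alg": "SHA-256",
        "payload_hash": hashlib.sha256(body_json.encode("utf-8")).hexdigest(),
        "iat": int(time.time()),
        "jti": str(uuid.uuid4()),
    }
    # This string is what gets signed with the RS256 private key;
    # the final JWT is signing_input + "." + b64url(signature)
    return b64url(json.dumps(header).encode()) + "." + b64url(json.dumps(payload).encode())
```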

4️⃣ Step 4: Sign JWT

Use the Security section of REST connection:

  • Upload private key (PKCS#8 format).
  • Use a custom signing policy to sign JWT with RS256.

Or use an external custom function to sign:

ASCII(Base64Url(Header) + "." + Base64Url(Payload)) → sign → Base64Url(Signature)

5️⃣ Step 5: Construct Final CID Token

Concatenate:

Authorization Header = "JWS " + Header + "." + Payload + "." + Signature

Set this string in the Authorization header of REST Adapter.


📋 Required Headers

  • Authorization: JWS <CID Token>
  • Accept-Language: en-GB
  • Forwarded-For: <IP Address>
  • X-HSBC-Chnl-CountryCode: HK
  • X-HSBC-Chnl-Group-Member: HBAP
  • X-HSBC-Global-Channel-Id: PARTNER
  • X-HSBC-Request-Correlation-Id: UUID
  • X-HSBC-Client-Id: CLP
  • Content-Type: application/json


✅ Final Output

A complete CID token is structured like:

JWS eyJ2ZX...<Header>.eyJzdW...<Payload>.X1c8Cp...<Signature>

It is passed to the Authorization header like:

Authorization: JWS eyJ2ZX...<full token>

🧪 Testing & Validation

  • Use Postman or SoapUI to validate the generated JWT.
  • Tools like jwt.io help decode and verify token.
  • Ensure OIC has access to private key and correct time sync for iat.

📎 Reference

OIC implementation screenshots:

TBD

Sunday, June 22, 2025

OIC ERP - How to Restore Missed or Exhausted Business Events in OIC

Receiving Missed Business Events in OIC

Step 1: Deactivate an OIC orchestration that has subscribed to ERP business events for PO Receipts.

Note: If the orchestration being deactivated contains a business event subscription, a message asks whether you want to delete the event subscription while deactivating. If you choose to delete the subscription, the integration does not receive any events after it is reactivated. Below is just an example screenshot.

If you do not delete the event subscription, events raised while the integration is down are redelivered if the integration is reactivated within six hours. Beyond six hours, those requests are exhausted.

Step 2: Create a PO Receipt. PO Receipt 10944 was created in Fusion at 10:46 AM, while the integration was deactivated.

Step 3: Reactivate the orchestration after some time.

The integration was activated at 1 PM, and we can see that it received the business event for the specific receipt (10944).

Conclusion: Business events are retried on the SaaS side and automatically redelivered if OIC is reactivated within 6 hours of becoming unavailable.

Restoring Exhausted Business Events

Step 1: Deactivate the integration.

The integration is deactivated for one day. It subscribes to the “PurchaseOrder” business event.


Step 2: Run the API to find exhausted business events.

The total number of exhausted business events in the last 24 hours is 68.

API URLs:

<fusion url>/soa-infra/PublicEvent/diagnostic/exhaustedEventsDetail?lastHours=24&pageSize=100

<fusion url>/soa-infra/PublicEvent/diagnostic/exhaustedEventsCount?lastHours=24&pageSize=100

Note: We can find exhausted events for a specific business event such as “PurchaseOrder” by filtering on the Subscription ID. Filtered this way, 43 requests were exhausted in the last 24 hours.

Step 3: Activate the integration and restore the exhausted events for “PurchaseOrder” using the APIs below.

API URI: /soa-infra/PublicEvent/exhaustedEvents/restore

Sample Payload:

{
  "subscriptionId": "(*****-***-***-**********-hy.integration.ap-hyderabad-1.ocp.oraclecloud.com):aHR0cHM6Ly9zb21pYy1vaWMtZGV2LWF4bGc4Ymlta2ZuZC1oeS5pbnRlZ3JhdGlvbi5hcC1oeWRlcmFiYWQtMS5vY3Aub3JhY2xlY2xvdWQuY29tL2ljL3dzL2ludGVncmF0aW9uL3YxL2Zsb3dzL2VycC9QT19FVkVOVC8xLjAv",
  //"startDate": "29-04-2025 04:03:24",
  //"endDate": "29-04-2025 10:20:25",
  "lastHours": 24
}
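A sketch of constructing this restore call (Python stdlib; the request is built but not sent, and authentication headers are omitted):

```python
import json
import urllib.request

def build_restore_request(fusion_base_url, subscription_id, last_hours=24):
    # POST body matches the sample payload above; startDate/endDate can be
    # used instead of lastHours to target a specific window
    body = json.dumps({"subscriptionId": subscription_id,
                       "lastHours": last_hours}).encode()
    return urllib.request.Request(
        fusion_base_url + "/soa-infra/PublicEvent/exhaustedEvents/restore",
        data=body,
        method="POST",
        headers={"Content-Type": "application/json"},
    )
```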

After restoring, verify that the exhausted business event count for “PurchaseOrder” is now 0.

All 43 records were redelivered to the subscribed integration.

Conclusion: Restoring exhausted events is feasible via the Oracle-provided APIs.

POC document link:

https://docs.google.com/document/d/1E4KYFsKJhDrEEvYo9wQcHjZQraunWKgQ/edit?usp=drivesdk&ouid=105651791254983245041&rtpof=true&sd=true

Reference Document: https://support.oracle.com/epmos/faces/DocumentDisplay?_afrLoop=13339287498551&id=2751325.1&_afrWindowMode=0&_adf.ctrl-state=ccb6zmdcu_4



Thursday, June 19, 2025

OIC - Designing a Reusable Callback Integration for Multiple FBDI Uploads in Oracle Integration Cloud (OIC)

🧾 Use Case Overview

In most Oracle Fusion implementations, File-Based Data Import (FBDI) is a widely used approach to load master and transactional data into Fusion Cloud. Each business object (like Employees, Items, Customers, Daily Rates, etc.) has a unique FBDI template and requires an integration that:

  1. Generates the FBDI ZIP file
  2. Uploads the file to UCM
  3. Submits an ESS Job (e.g., "Load Interface File for Import")
  4. Monitors the ESS job status
  5. Performs post-processing on success/failure

When you’re handling multiple business objects, step 4 and 5 are usually the same across integrations. Repeating this logic in every flow makes it:

  • Redundant
  • Hard to maintain
  • Prone to errors

👉 So why not reuse this logic?


🎯 Goal

To create one common callback integration in OIC that can be invoked from any FBDI integration to:

  • Poll the ESS Job status
  • Handle success/failure
  • Perform downstream processing based on the business object

🧱 Architecture Overview

[ FBDI Integration: Employees     ] \
[ FBDI Integration: Items         ]  \
[ FBDI Integration: Daily Rates   ]   --> [ 🔁 Common Callback Integration ]
[ FBDI Integration: Customers     ]  /

Each main FBDI flow:

  • Ends by calling the Common Callback Integration
  • Sends a payload with:
    • requestId (ESS Job ID)
    • businessObject (like "EMPLOYEES")
    • fileName, submittedBy, etc.

🧰 Prerequisites

  • Oracle Integration Cloud Gen 2/3
  • ERP Cloud Adapter and SOAP connection to ERPIntegrationService
  • Basic understanding of:
    • FBDI process
    • ESS Jobs in Fusion
    • While/Switch activities in OIC

🧭 Detailed Implementation Steps


Step 1: FBDI Integration Flow (Example: Daily Rates)

This is your normal FBDI flow:

  1. Read source data
  2. Transform and generate FBDI .zip file
  3. Upload to UCM using ERP Cloud Adapter
  4. Submit ESS Job using submitESSJobRequest
  5. Capture requestId from the response
  6. Call Common Callback Integration with a payload:
{
  "requestId": "456789",
  "businessObject": "DAILY_RATES",
  "fileName": "DailyRates_20250618.zip",
  "submittedBy": "ManojKumar"
}

Step 2: Create the Common Callback Integration

Integration Type: App-Driven Orchestration
Trigger: REST Adapter (POST operation)

📥 Input JSON Schema:

{
  "requestId": "string",
  "businessObject": "string",
  "fileName": "string",
  "submittedBy": "string"
}

Step 3: Parse and Assign Variables

  • Assign requestId, businessObject, and other fields to local variables.
  • Initialize:
    status = ""
    loopCount = 0
    

๐Ÿ” Step 4: Implement Polling Logic using While Loop

Condition:

status != "SUCCEEDED" AND status != "ERROR" AND loopCount < 20

Inside the loop:

  1. Call getESSJobStatus via ERPIntegrationService SOAP connection
  2. Parse response:
    <JobStatus>SUCCEEDED</JobStatus>
    <Message>Completed successfully</Message>
    
  3. Assign status to local variable
  4. Wait for 1 minute (use Wait activity)
  5. Increment loopCount += 1
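The polling loop above can be sketched as follows (Python for illustration; `get_status` stands in for the getESSJobStatus SOAP call):

```python
import time

def poll_ess_job(get_status, max_polls=20, interval_secs=60):
    # Poll until the ESS job reaches a terminal state or the retry limit is hit
    status, polls = "", 0
    while status not in ("SUCCEEDED", "ERROR") and polls < max_polls:
        status = get_status()
        polls += 1
        if status not in ("SUCCEEDED", "ERROR"):
            time.sleep(interval_secs)  # the Wait activity equivalent
    return status
```

With a 1-minute interval and 20 polls this bounds the wait at roughly 20 minutes, matching the loop condition above.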

🧠 Step 5: Decision Based on Status

After exiting the loop, check if:

  • status == "SUCCEEDED": proceed with business logic
  • status == "ERROR": log failure and send notification

🧪 Step 6: Use Switch for Business Object-Specific Logic

Switch on businessObject:
├── "DAILY_RATES"   → Call Daily Rates post-processing
├── "EMPLOYEES"     → Call Employees HDL flow
├── "ITEMS"         → Write data to DB or update flag
├── "CUSTOMERS"     → Trigger BIP report / send confirmation

Use Local Integration Calls or inline logic as needed.
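One way to picture the Switch is a dispatch table (Python for illustration; the handler names are placeholders for the local integration calls or inline logic):

```python
def post_process_daily_rates(ctx):
    return "daily-rates-done"   # placeholder for Daily Rates post-processing

def post_process_employees(ctx):
    return "employees-done"     # placeholder for the Employees HDL flow

HANDLERS = {
    "DAILY_RATES": post_process_daily_rates,
    "EMPLOYEES": post_process_employees,
}

def dispatch(business_object, ctx):
    # Route to business-object-specific logic; fail loudly for unknown objects
    handler = HANDLERS.get(business_object)
    if handler is None:
        raise ValueError("No post-processing defined for " + business_object)
    return handler(ctx)
```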


Step 7: Outputs We Can Fetch After getESSJobStatus

When getESSJobStatus completes, the response includes a reportFile or document ID that points to the output/log files. We can fetch:

  1. .log file (execution log)
  2. .out file (output message, summary of load)
  3. .csv error file (for rows that failed)

Call getESSJobExecutionDetails (Optional)

We can invoke another operation (if available) to get details of the child jobs when the submitted job is a job set or composite.

Alternative Approach (Preferred):

Use ERPIntegrationService.downloadESSJobExecutionDetails or UCM file download API to download the .log and .out files using requestId.


Use the UCM Web Service to Download Files

Once the ESS job runs, the output files are stored in UCM. We can call ERPIntegrationService > downloadExportOutput:

Input: requestId. The response contains base64-encoded file content, which can be parsed or stored in a DB or on FTP for audit.

Alternatively, use the WebCenter Content (UCM) API to list files by requestId and download them.

Sample Output from .out File (Import Summary)

Total Records Read: 100  
Successfully Imported: 95  
Failed Records: 5  
Log File: import_daily_rates.log
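A quick way to turn such a summary into structured data (Python for illustration, assuming the simple `Key: Value` layout shown above):

```python
def parse_import_summary(out_text):
    # Parse 'Key: Value' lines into a dict, converting counts to int
    summary = {}
    for line in out_text.splitlines():
        if ":" not in line:
            continue
        key, _, value = line.partition(":")
        value = value.strip()
        summary[key.strip()] = int(value) if value.isdigit() else value
    return summary

sample = """Total Records Read: 100
Successfully Imported: 95
Failed Records: 5
Log File: import_daily_rates.log"""
print(parse_import_summary(sample))
```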

📧 Step 8: Optional Email Notification

Send an email with:

  • ESS Job Result
  • File name
  • Business object
  • Message or error (if failed)

📂 Sample getESSJobStatus Request Payload (SOAP)

<typ:getESSJobStatusRequest>
   <typ:requestId>456789</typ:requestId>
</typ:getESSJobStatusRequest>

Sample Response:

<typ:getESSJobStatusResponse>
   <typ:JobStatus>SUCCEEDED</typ:JobStatus>
   <typ:Message>Completed successfully</typ:Message>
</typ:getESSJobStatusResponse>

🚨 Error Handling Strategy

  • If ESS Job fails (ERROR), log:
    • requestId
    • businessObject
    • error message
  • Store in DB or call a notification integration
  • Enable retry if needed

💡 Best Practices

  • Set a polling limit (e.g., 20 retries = ~20 mins)
  • Avoid infinite loops
  • Use consistent naming conventions for businessObject
  • Create reusable sub-integration flows for downstream processing
  • Add logging and tracking (e.g., via ATP/Logging framework)

🚀 Enhancements We Can Add

  • Add DB persistence for incoming callback metadata
  • Scheduled Integration to recheck failed jobs
  • Audit dashboard for all FBDI callbacks
  • Notify users in MS Teams / Slack using Webhook

Conclusion

Building a common callback integration for all FBDI flows:

  • Makes your integrations modular and maintainable
  • Reduces redundancy
  • Centralizes your error handling and monitoring

This pattern can be extended to HCM Extracts, BIP report monitoring, and ESS job chains as well.


📦 Sample Naming Suggestions

Artifact: Suggested Name

  • Integration: INT_COMMON_ESS_CALLBACK
  • SOAP Connection: ERPIntegrationServiceSOAP
  • Variable (request ID): varRequestId
  • Variable (loop counter): varLoopCount
  • Email Subject: FBDI ${businessObject} - Job ${status}


OIC Gen3 - New Feature - File Polling Feature using FTP trigger in OIC

Unlocking Efficient FTP Triggers: Using the New File‑Polling Feature in Oracle Integration Cloud (OIC Gen3)

Subtitle:
Learn how to automate smaller file reads from FTP servers using the built‑in file‑polling trigger in OIC Gen3 24.10+.

🛠 Use Case

Many integration scenarios require processing files placed onto an FTP server—like daily CSV or XML reports—without manual intervention. Prior to OIC Gen3 24.10, triggering on file arrival involved workarounds such as scheduled scripts or custom polling logic.

With the new File‑Polling feature, you can:

  • Trigger OIC integrations based on new files matching a naming pattern.
  • Auto‑load file contents as payload—ideal for lightweight file reads.
  • Configure archive, delete, or reject handling.
  • Avoid downloads, saving bandwidth and simplifying flow.

🔧 Solution Overview: Step‑by‑Step

  1. Ensure Compatibility
    Verify you're running OIC Gen3 version 24.10 or higher—this is when FTP file‑polling became available.

  2. Set Up FTP Connection
    In your OIC connection settings, choose or configure your FTP/SFTP source.

  3. Use File‑Polling Trigger
    In the integration builder, select the “File Polling” trigger. You’ll see options for:

    • Polling frequency (e.g., every 5 minutes)
    • Source directory
    • Filename pattern (e.g., *.csv)
    • Schema type (CSV, XML), plus sample file upload support
  4. File Handling Options
    Decide what happens after triggering:

    • Archive to another folder
    • Move after successful read
    • Delete automatically
    • Ignore delete‑errors to prevent retries
    • Reject invalid files
  5. Design Integration Flow
    After the trigger, use the file’s contents payload to:

    • Parse with a schema
    • Route data to downstream systems
    • Handle errors via reject logic
  6. Test and Validate (POC)
    Always run a proof‑of‑concept:

    • Drop a test file matching your pattern
    • Confirm the integration triggered as expected
    • Validate the post‑processing behavior (archive/move/delete)
  7. Deploy and Monitor
    Once verified, deploy your integration. Monitor success/failure and adjust polling or file‑handling parameters as needed.

Below is a demonstration of how to poll a file (screenshots omitted).

Saturday, June 14, 2025

OIC - Monitoring and Troubleshooting Integrations in Oracle Integration Cloud (OIC) Gen 3: A Practical Guide

📘 Use Case:

As an OIC developer or integration lead, you often need to monitor live integrations for performance, failures, and latency issues. With OIC Gen 3, enhanced tools like Observability dashboard, Projects tab, and Activity Stream help you quickly identify, trace, and resolve issues.

🛠️ Solution Steps: Monitoring & Troubleshooting in OIC Gen 3

๐Ÿ” 1. Observability Dashboard (Home → Observability)

  • View overall system health:
    • Success/failure trends
    • Execution volumes
    • Error frequency by integration
  • Helps you monitor across all projects and integrations.

📌 Tip: Use date filters and drill into specific time windows for better root cause analysis.


๐Ÿ“ 2. Projects Tab + Observe Subtab (Projects → [Your Project] → Observe)

The Observe tab under Projects gives real-time, project-specific analytics.

Key Features:

  • Live status of all integrations in the project.
  • Visual indicators for:
    • Failed instances
    • Slow executions
    • Backlogged or retrying flows
  • You can click an integration to view:
    • Run history
    • Payload details
    • Fault/error points

📌 Use Case: Great for team-based development — each team can monitor and troubleshoot flows relevant to their assigned project.


🔄 3. Activity Stream (Monitor → Tracking)

  • Track integration instances using filters:
    • Integration name
    • Status (failed, completed)
    • Date/time range
  • Open individual runs to:
    • View complete execution trace
    • See error messages and transformation values
    • Access input/output payloads

📌 Pro Tip: Combine this with the Observe view to jump from high-level KPIs to instance-level diagnostics.


🧰 4. Diagnostic Logs (Observability → Diagnostic Logs)

  • Search logs using:
    • Integration name
    • Flow instance ID
    • Timestamps
  • Useful for back-end or infrastructure issues, not visible in instance-level logs.

🔔 5. Notifications & Alerts

  • Use Integration Insight or configure external notification logic to send alerts when:
    • Integration fails repeatedly
    • A flow runs unusually long
    • SLA thresholds are breached

🧪 6. Replay and Testing

  • For eligible integrations, replay failed instances after correcting data or logic.
  • Also supports test executions from integration canvas for verification before redeploying.



๐Ÿ“Œ Use Case In real-time OIC integrations, JSON payloads are exchanged with external systems via REST APIs. When such integrations fail (du...