Tuesday, December 9, 2025

Microsoft Excel - Working with Large Sets of Excel Data

📘 Microsoft Excel – Working with Large Sets of Data

Managing large datasets in Excel becomes easier when you use the right tools. Below are the most important features to improve navigation, analysis, and printing.


1️⃣ Freeze Panes – Keep Headers Visible

Freeze Panes helps you lock specific rows or columns so they remain visible while scrolling through large files.

How to Use

  • Freeze Top Row
    View → Freeze Panes → Freeze Top Row
  • Freeze First Column
    View → Freeze Panes → Freeze First Column
  • Freeze Specific Rows & Columns
    1. Select the cell below the rows and to the right of the columns you want to freeze
    2. Go to View → Freeze Panes → Freeze Panes

Use Case

When scrolling through thousands of records, the headers stay visible, making data easier to understand.




2️⃣ Grouping Data (Rows or Columns)

Grouping helps collapse and expand sections of data—ideal for large sheets with repeated categories.

How to Use

  1. Select the rows or columns you want to group
  2. Go to Data → Group
  3. Use the + / – signs to expand or collapse

Use Case

  • Group rows of monthly data under quarters
  • Hide detail rows and show only the summary
  • Manage long reports neatly


3️⃣ Print Options for Large Data Sets

Printing large worksheets can be messy unless properly formatted.

Key Print Tools

  • Print Titles: Repeat header rows on every page
    Page Layout → Print Titles → Rows to Repeat at Top
  • Fit Sheet on One Page
    File → Print → Scaling → Fit Sheet on One Page
  • Set Print Area
    Page Layout → Print Area → Set Print Area
  • Page Break Preview
    View → Page Break Preview to adjust how pages split

Use Case

Ensures large reports print cleanly with titles and without cutting columns.



4️⃣ Linking Worksheets (3D Formulas)

3D formulas allow calculations across multiple worksheets at once.

How to Use

Example: Sum cell B2 from Sheet1 to Sheet5:

=SUM(Sheet1:Sheet5!B2)

Use Case

Perfect for:

  • Monthly data sheets
  • Summarizing identical layouts (Jan–Dec)
  • Creating dashboards pulling from multiple sheets
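
Other aggregate functions accept the same 3D reference. A minimal sketch, assuming a workbook with identically laid-out sheets named Jan through Dec:

=AVERAGE(Jan:Dec!B2)
=COUNTA(Jan:Dec!A2)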


5️⃣ Consolidating Data from Multiple Worksheets

Combine data from multiple sheets into one summary sheet.

How to Use

  1. Go to Data → Consolidate
  2. Choose function (Sum, Count, Average, etc.)
  3. Add references from multiple sheets
  4. Select “Top row” and “Left column” labels if needed
  5. Click OK

Use Case

  • Combine monthly sales into yearly summary
  • Merge repeated formats from different teams or regions
  • Build master reports automatically




🎯 Summary Table

  • Freeze Panes – Keep headers visible while scrolling
  • Grouping – Collapse/expand large sections of data
  • Print Options – Print large sheets cleanly and professionally
  • 3D Formulas – Use formulas across multiple sheets
  • Consolidate – Merge data from several sheets into one


Monday, December 8, 2025

OIC – Unified File Reprocessing Framework Using Scheduler Flag, Dual SFTP Sources, Child Integration, and Centralized Error Handling

📌 Use Case

Business processes often require file reprocessing and fresh processing from two different locations:

  1. OIC SFTP → for reprocessing previously failed files
  2. Vendor SFTP → for processing newly delivered source files

To manage both scenarios within a single automated pipeline, we use a Scheduler flag, a central Main Integration, and a dedicated Child Integration. A unified Global Fault Handler sends errors to CCS, logs them into Datadog, and moves failed files to an error directory.

This ensures stable, recoverable, and auditable file processing within OIC.


🛠️ Solution Steps


1️⃣ Scheduler Integration – Control the Processing Mode

The scheduler integration triggers the main service and passes a flag:

filePresentInOICSFTP = Y

  • System should process files from OIC SFTP
  • This is used for reprocessing failed files

filePresentInOICSFTP = N

  • System should process files from Vendor SFTP
  • This handles fresh daily load

This flag enables dynamic routing and avoids maintaining separate integrations for reprocessing.
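
In the main integration this flag typically drives a Switch action; a minimal sketch of the two branch conditions (XPath, using the article's flag name):

Reprocessing branch: $filePresentInOICSFTP = "Y"
Fresh-run branch:    $filePresentInOICSFTP = "N"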


2️⃣ Main Integration – File Discovery & Preparation

The main service behaves differently based on the flag.


✔️ A. If filePresentInOICSFTP = Y (Reprocessing Mode)

  • List files from OIC SFTP
  • For each file found:
    • Call the Child Integration
    • Pass file name + path + flag

No need to download or create new files—direct reprocessing.


✔️ B. If filePresentInOICSFTP = N (Fresh Run Mode)

  • List files from Vendor SFTP
  • For each file:
    • Download file from vendor SFTP
    • Write the file to OIC SFTP for internal processing
    • Call the Child Integration, passing the newly stored file name

This ensures all files—new or old—are processed uniformly through OIC SFTP.


3️⃣ Child Integration – Validation & File Handling

This integration performs the actual business logic.


✔️ Step A: Retrieve File Reference

Since files are stored in OIC SFTP in both modes:

  • Use File Server Action → Read File
  • Generate file reference for reading & parsing

✔️ Step B: Read File Content

  • Read the contents of the file using file reference
  • Parse the data based on flat file layout

✔️ Step C: Business Validation

Apply the core rule:

The required field must contain “00”.

If the condition fails:

  • Use Throw New Fault
    • This forces execution of the Global Fault Handler
    • Ensures controlled, trackable exceptions
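
A minimal sketch of the If/Switch condition that routes to the Throw New Fault action (XPath; Var_StatusCode is an assumed variable holding the parsed field, not the article's actual name):

not(contains($Var_StatusCode, "00"))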

✔️ Step D: Delete Source File

After reading:

  • Delete file from OIC SFTP

If file was originally from vendor SFTP:

  • The vendor SFTP file is already cleaned earlier by the main service
  • (or can be cleaned here based on your design)

This avoids duplicate processing.


4️⃣ Global Fault Handler – Centralized Error Management

When the child flow throws an exception:


✔️ A. Create Case in CCS (TO Module)

  • Send error details and file metadata
  • Trigger TO creation for functional follow-up

✔️ B. Log Errors to Datadog

Log structured fields:

  • File name
  • Validation failure message
  • Timestamp
  • Integration instance ID
  • Severity

This ensures observability and alerting.
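
A minimal sketch of the structured log body sent to Datadog (JSON; field names and values are illustrative, not the article's exact payload):

{
  "ddsource": "oic",
  "service": "file-processing",
  "status": "error",
  "message": "Validation failed: required field does not contain 00",
  "fileName": "VENDOR_FILE_20251208.txt",
  "instanceId": "100200300",
  "severity": "HIGH"
}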


✔️ C. Move the File to Error Directory

  • Take the original payload/file
  • Move it to OIC SFTP Error Folder
  • Prevents accidental deletion of faulty files
  • Allows manual reprocessing if needed

✔️ Final Outcome

This end-to-end design enables:

  • Single unified pipeline for fresh processing + reprocessing
  • Smart routing based on Scheduler flag
  • Guaranteed file availability in OIC SFTP for consistent logic
  • Strong validation using controlled Throw New Fault
  • Automatic integration with CCS for error cases
  • Observability via Datadog
  • Robust error-handling and file preservation
  • Minimal manual intervention

A complete, enterprise-grade framework for file ingestion & reprocessing in OIC.


Sunday, December 7, 2025

OIC – Ensuring Accurate Status Reporting in Real-Time Sync Integrations Using track_var_3

📌 Use Case

In real-time (synchronous) integrations, we noticed an issue where the OIC dashboard shows the instance status as Succeeded even though the internal flow actually failed.
This mismatch creates confusion for support teams and impacts monitoring because synchronous integrations do not properly propagate faults in certain failure scenarios.

To fix this, we implemented a custom tracking mechanism using track_var_3, where we manually mark the integration as SUCCESS or ERROR depending on the actual execution of the full flow.


🛠️ Solution Steps

1️⃣ Identify the Problem Scenario

  • The integration appears as Succeeded in the monitoring dashboard.
  • But inside the flow, some actions failed (for example: failed DB operation, REST error, or transformation issue).
  • No global fault was triggered, so OIC still treated it as a success.

2️⃣ Introduce a Tracking Variable (track_var_3)

Use the built-in tracking variables of OIC.

  • Reserve track_var_3 exclusively for storing the final status of the integration.
  • It will be updated as:
    • "SUCCESS" → when the full flow is executed without issues
    • "ERROR" → when any failure occurs

This variable is visible in monitoring and helps support quickly identify the real outcome.


3️⃣ Assign “SUCCESS” at the End of the Main Flow

At the last step of your primary flow:

  • Add an Assign Action
  • Set:
    track_var_3 = "SUCCESS"
    

This ensures the flow marks itself successful only after all business logic passes.


4️⃣ Handle Failures via Global Fault

In the Global Fault Handler, add:

  • An Assign Action
    track_var_3 = "ERROR"
    

This ensures:

  • Any unhandled fault
  • Any adapter failure
  • Any transformation error
    → Automatically marks the instance as ERROR.


5️⃣ Use the Tracking Variable for Support Monitoring

The support team will now monitor:

  • Instance status (may still show “Succeeded” due to OIC behaviour)
  • track_var_3 value (always accurate)

This provides the actual integration result.


6️⃣ Optional: Return Status to Calling App

In synchronous integrations:

  • Respond back to the caller with
    { "status": track_var_3 }
    

so they also get the correct success/error.
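
A minimal sketch of that response mapping in XSLT (element names are illustrative; only the tracking-variable reference follows the article):

<ns1:response>
    <ns1:status>
        <xsl:value-of select="$track_var_3"/>
    </ns1:status>
</ns1:response>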


✔️ Final Outcome

With this approach:

  • Even if OIC marks the instance as “Succeeded”, track_var_3 shows the real outcome
  • Support teams get accurate visibility
  • No missed failures in real-time sync services

Tuesday, December 2, 2025

OIC - Sharing Lookup Values Across Projects Using dvm:sharedLookupValue

Use Case

In many enterprise integrations, multiple projects need to reference the same configuration data such as:

  • SFTP locations
  • Integration IDs
  • File paths
  • Environment-specific values

Maintaining separate lookup files for each project leads to:

  • Data inconsistency
  • High maintenance effort
  • Deployment overhead across environments

To avoid this, we use a Shared Lookup (DVM) that can be accessed across different OIC projects using the built-in function:

dvm:sharedLookupValue()

This ensures centralized configuration management and reusability.


Solution Approach

We store common configuration values in a Shared Lookup (DVM) under one project and access it from another project using the sharedLookupValue function inside an Assign action.

Step 1: Create a Shared Lookup

In the source project:

  • Create a lookup (DVM) named:
    Common_CCS_Interface_SFTP_Lookup
    
  • Example structure:
    IntegrationId    SourceFileLocation
    INT105           /ftp/inbound/path

Step 2: Call Shared Lookup from Another Project

In the target project, use an Assign Action with this expression:

dvm:sharedLookupValue(
  'CUSTOMER_BO2_01',
  'Common_CCS_Interface_SFTP_Lookup',
  'IntegrationId',
  'INT105',
  'SourceFileLocation',
  'NA'
)

Parameter Explanation

  • CUSTOMER_BO2_01 – Source project code
  • Common_CCS_Interface_SFTP_Lookup – Shared lookup name
  • IntegrationId – Source column
  • INT105 – Source value
  • SourceFileLocation – Target column
  • NA – Default value if no match is found

Benefits
  • ✅ Single source of truth
  • ✅ Reduced deployment effort
  • ✅ Easy maintenance
  • ✅ Cross-project reusability
  • ✅ Environment-specific control

Conclusion

Using dvm:sharedLookupValue in Oracle Integration Cloud enables seamless sharing of lookup values across projects, improves governance, and significantly reduces operational overhead. This approach is highly recommended for managing common SFTP paths, integration identifiers, and environment configurations in large OIC programs.


Monday, November 24, 2025

OIC - Sorting Unifier UDR Data by Latest Creation Date in OIC Using XSLT

🔍 Use Case

In an Oracle Integration Cloud (OIC) flow, we receive Unifier UDR report rows where the creation date is in MM/DD/YYYY HH:MM AM/PM format.
To fetch only the latest project record, we must:

  1. Convert the UDR date/time to a sortable format → YYYYMMDDHHMM00
  2. Sort all rows in descending order
  3. Pick the latest row’s details

This allows us to correctly extract the most recent department/project data for further processing.


🛠️ Solution Steps

1. Extract Date Components Using XSLT

From input (example):
05/21/2025 07:15 PM

We extract:

  • Month → 05
  • Day → 21
  • Year → 2025

2. Extract Time Components

  • Hour → 07
  • Minute → 15
  • AM/PM → PM

3. Convert 12-Hour Time to 24-Hour Format

Example conversion logic:

Input      Output
07:15 PM   19:15
12:30 AM   00:30
11:45 AM   11:45

In XSLT:

  • If PM and hour ≠ 12 → hour + 12
  • If AM and hour = 12 → 00 (midnight)
  • Otherwise → keep the hour as-is, padded to two digits

4. Combine Into Sortable Format

Final sortable value:

YYYYMMDDHHMM00

Example:

Input:
05/21/2025 07:15 PM

Converted:
20250521191500

5. Sort and Pick Latest Record

Use XSLT <xsl:sort order="descending"> to ensure the newest timestamp appears first.

6. Build Output Structure

After sorting, the latest record is passed to the next system inside:

<NewProjectDetails>
    <departmentId>...</departmentId>
    <projectId>...</projectId>
    <creationTime>20250521191500</creationTime>
</NewProjectDetails>

✔️ Final Result

By converting the date/time into a sortable format and applying descending sort, OIC always fetches the most recent project details, even if multiple rows exist for different departments.

The following XSLT converts 05/21/2025 07:15 PM into the sortable format YYYYMMDDHHMM00:

<xsl:template match="/" xml:id="id_11">
    <nstrgmpr:Write xml:id="id_12">
        <ns40:request-wrapper>
            <xsl:for-each select="$GetlatestDepartmentId1/nsmpr1:executeResponse/ns33:response-wrapper/ns33:data/ns33:report_row">
                <!-- Extract date parts (MM/DD/YYYY) -->
                <xsl:variable name="month" select="substring(ns33:c3, 1, 2)"/>
                <xsl:variable name="day" select="substring(ns33:c3, 4, 2)"/>
                <xsl:variable name="year" select="substring(ns33:c3, 7, 4)"/>
                <!-- Extract time parts (HH:MM AM/PM) -->
                <xsl:variable name="timePart" select="substring-after(ns33:c3, ' ')"/>
                <xsl:variable name="hourStr" select="substring($timePart, 1, 2)"/>
                <xsl:variable name="minute" select="substring($timePart, 4, 2)"/>
                <xsl:variable name="amPm" select="substring($timePart, 7, 2)"/>
                <!-- Convert the 12-hour hour to 24-hour -->
                <xsl:variable name="hour24">
                    <xsl:choose>
                        <!-- If PM and hour is not 12, add 12 -->
                        <xsl:when test="$amPm = 'PM' and number($hourStr) != 12">
                            <xsl:value-of select="number($hourStr) + 12"/>
                        </xsl:when>
                        <!-- If AM and hour is 12 (midnight), set to 00 -->
                        <xsl:when test="$amPm = 'AM' and number($hourStr) = 12">
                            <xsl:value-of select="'00'"/>
                        </xsl:when>
                        <!-- Otherwise, use the hour as-is -->
                        <xsl:otherwise>
                            <xsl:value-of select="$hourStr"/>
                        </xsl:otherwise>
                    </xsl:choose>
                </xsl:variable>
                <!-- Pad single-digit hours with a leading zero (e.g., 9 becomes 09) -->
                <xsl:variable name="paddedHour">
                    <xsl:choose>
                        <xsl:when test="string-length($hour24) = 1">
                            <xsl:value-of select="concat('0', $hour24)"/>
                        </xsl:when>
                        <xsl:otherwise>
                            <xsl:value-of select="$hour24"/>
                        </xsl:otherwise>
                    </xsl:choose>
                </xsl:variable>
                <ns40:NewProjectDetails>
                    <ns40:departmentId>
                        <xsl:value-of select="ns33:c1"/>
                    </ns40:departmentId>
                    <ns40:projectId>
                        <xsl:value-of select="ns33:c2"/>
                    </ns40:projectId>
                    <!-- Build the sortable YYYYMMDDHHMM00 value -->
                    <ns40:creationTime>
                        <xsl:value-of select="concat($year, $month, $day, $paddedHour, $minute, '00')"/>
                    </ns40:creationTime>
                </ns40:NewProjectDetails>
            </xsl:for-each>
        </ns40:request-wrapper>
    </nstrgmpr:Write>
</xsl:template>


The following XSLT sorts the rows by the sortable creation date and maps the latest record:

<xsl:template match="/" xml:id="id_11">
    <nstrgmpr:execute xml:id="id_12">
        <nstrgmpr:HTTPHeaders>
            <ns54:StandardHTTPHeaders>
                <ns54:Authorization>
                    <xsl:value-of select="concat(&quot;Bearer &quot;, $GetUnifierToken/nsmpr1:executeResponse/ns36:response-wrapper/ns36:data/ns36:access_token)"/>
                </ns54:Authorization>
            </ns54:StandardHTTPHeaders>
        </nstrgmpr:HTTPHeaders>
        <ns52:request-wrapper xml:id="id_41">
            <ns52:reportname xml:id="id_45">
                <xsl:value-of xml:id="id_46" select="dvm:lookupValue(&quot;ABC_Common_CORS_Static_Data_Lookup&quot;, &quot;Key&quot;, &quot;UNIFIER_USERGROUPBYDEPARTMENT_UDR_REPORTNAME&quot;, &quot;Value&quot;, &quot;NA&quot;)"/>
            </ns52:reportname>
            <ns52:query xml:id="id_42">
                <ns52:label xml:id="id_100">
                    <xsl:value-of xml:id="id_101" select="'PROJECTNUMBER'"/>
                </ns52:label>
                <!-- Sort descending on the sortable creationTime and keep only the first (latest) row -->
                <xsl:for-each select="$WriteDepartmentId_REQUEST/nsmpr3:Write/ns38:request-wrapper/ns38:NewProjectDetails">
                    <xsl:sort select="ns38:creationTime" data-type="xsd:string" order="descending"/>
                    <xsl:if test="position() = 1">
                        <ns52:value1>
                            <xsl:value-of select="ns38:departmentId"/>
                        </ns52:value1>
                    </xsl:if>
                </xsl:for-each>
            </ns52:query>
        </ns52:request-wrapper>
    </nstrgmpr:execute>
</xsl:template>




Wednesday, November 19, 2025

OIC - Using Dynamic Paths for OCI Object Storage in OIC Integrations | Handling “Subfolders” in OCI Object Storage bucket

Use Case

In Oracle Integration Cloud (OIC), when working with Oracle Object Storage, “folders” inside a bucket are not real directories. They are simply part of the object name (key).
Because of this, OIC cannot dynamically navigate subfolders the same way as SFTP or FTP directories.

So when an integration requires:

  • Upload Path
  • Archive Path
  • Error/Reject Path
  • Dynamic Environment-specific Paths (DEV/TEST/PROD)

…you must pass these paths as string fields and build the full object name by concatenating path + filename.

This approach is required to handle dynamic folder structures, avoid hardcoding paths, and support multiple environments using a single integration.


Solution Overview

To manage folder-like object prefixes correctly in OIC:

  1. Create lookup/variables for folder paths (upload, archive, error).
  2. Pass these as integration input or derive from a lookup.
  3. Concatenate the path with the file name to create the full object key.
  4. Use the Object Storage adapter with the generated key (object name).
  5. Avoid assuming physical directories—treat each folder as a prefix.

Solution Steps

Step 1: Create Path Fields

Create the following fields in your integration or lookup:

  • BucketName – e.g., bkt-dev-payment
  • uploadPath – e.g., /incoming/employee/ or / for no subfolder
  • archivePath – e.g., /archive/employee/ or / for no subfolder
  • errorPath – e.g., /error/employee/ or / for no subfolder

Ensure each ends with a trailing slash.


Step 2: Accept or Fetch File Name

From the source system or from the file adapter, capture the file name:

Example:
EMPLOYEE_20251119.csv


Step 3: Construct the Object Key

Use an assign activity in OIC:

fullObjectName = uploadPath || fileName

Examples:

incoming/employee/EMPLOYEE_20251119.csv
archive/employee/EMPLOYEE_20251119.csv

This string is the actual object key in Object Storage.
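
In an OIC Assign action the same concatenation is written with concat(); a minimal sketch, assuming the two values are held in variables named Var_UploadPath and Var_FileName:

concat($Var_UploadPath, $Var_FileName)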


Step 4: Configure Object Storage Adapter

In the adapter configuration:

  • Select "Specify Object Name"
  • Map it to the concatenated value (fullObjectName)

No directory browsing is required because subfolders are just text prefixes.


Step 5: Write or Read the Object

Use the adapter normally — OIC will treat the full object key as the complete path.

  • Write → Upload to path-like object key
  • Read → Fetch file using the exact key
  • Move/Archive → Upload the same file under the archive key and delete the original

Step 6: Repeat for Archive and Error Handling

During archiving:

archiveObjectName = archivePath || fileName

During error processing:

errorObjectName = errorPath || fileName

Worked examples:

Upload / archive / get file:
URI: /n/{namespaceName}/b/{bucketName}/o/{subfolder}{objectName}
Map the subfolder path from the lookup:
dvm:lookupValue("<LookupName>_Common_CCS_Interface_SFTP_Lookup",
                "IntegrationId",
                $Var_InterfaceId,
                "ArchiveBucketSubfolder",
                "/")




List files:
Create a variable and store the bucket subfolder path:
fn:substring-after(
    dvm:lookupValue(
        "<LookupName>_Common_CCS_Interface_SFTP_Lookup",
        "IntegrationId",
        $Var_InterfaceID,
        "BucketSubfolder",
        "/"
    ),
    "/"
)
Add the bucket subfolder path as a prefix to the object name:
concat(
    $Var_BucketSubfolderPath,
    replace(
        dvm:lookupValue(
            "<LookupName>_Common_CCS_Interface_SFTP_Lookup",
            "IntegrationId",
            $Var_InterfaceID,
            "SourceFileName",
            "NA"
        ),
        "YYYYMMDD",
        concat(
            substring($FileProcessingDate, 3.0, 2.0),
            substring($FileProcessingDate, 5.0, 2.0),
            substring($FileProcessingDate, 7.0, 2.0)
        )
    )
)


Rename file:
This is the same as list files: add the subfolder prefix to the object name.




Final Note

This approach is the recommended design pattern because Object Storage does not support directory navigation. Treating paths as string prefixes ensures full flexibility and environment portability in all OIC integrations.


Sunday, November 16, 2025

Microsoft Excel - Pivot Tables

1. Understanding Excel PivotTables

A PivotTable is a tool that summarizes, analyzes, and presents large amounts of data quickly.
It helps you group, filter, total, count, and compare data without writing formulas.


2. Creating an Excel PivotTable

  1. Select your data range (with headers).
  2. Go to Insert → PivotTable.
  3. Choose where to place the PivotTable (new or existing worksheet).
  4. Drag fields into:
    • Rows → categories
    • Columns → comparison groups
    • Values → totals/counts
    • Filters → top-level filter


3. Modifying Excel PivotTable Calculations

You can change the calculation in Values:

  • Click the field → Value Field Settings
  • Choose Sum, Count, Average, Max, Min, % of total, etc.

Useful for switching from SUM to COUNT or AVERAGE easily.


4. Grouping PivotTable Data

You can group:

  • Dates → months, quarters, years
  • Numbers → group into ranges (e.g., 0–100, 101–200)
  • Text → manually group selected items

Right-click → Group.



5. Formatting PivotTable Data

You can format:

  • Numbers (currency, percentage, comma style)
  • Layout (Compact/Outline/Tabular)
  • Design (PivotTable Styles)
  • Remove/Show Grand Totals & Subtotals

Go to PivotTable Design and Layout options.


6. Modifying PivotTable Calculations (Repeated Topic)

This includes:

  • Changing the summary function
  • Calculating % of row, % of column, % of grand total
  • Showing difference from another value
  • Adding calculated fields (PivotTable Analyze → Fields, Items & Sets → Calculated Field)
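
For example, a hypothetical calculated field named Commission, assuming the source data has a Sales column:

= Sales * 0.05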


7. Drilling Down into PivotTable Data

To see detailed records:

  • Double-click any number in the PivotTable
    Excel creates a new sheet showing the underlying rows that make up that number.



8. Creating PivotCharts

  1. Click anywhere inside the PivotTable
  2. Go to Insert → PivotChart
  3. Choose chart type (Column, Line, Pie, Bar, etc.)

PivotCharts update automatically when the PivotTable is refreshed.



9. Filtering PivotTable Data

You can filter using:

  • Filters area (top-level filter)
  • Row/Column labels filter (label, value filters)
  • Search filters
  • Date filters


10. Filtering with the Slicer Tool

A Slicer is a visual filter button set.

To add a slicer:

  1. Click PivotTable
  2. Go to PivotTable Analyze → Insert Slicer
  3. Select fields for filtering
  4. Click the slicer buttons to filter instantly



📘 Microsoft Excel – Working with Large Sets of Data Managing large datasets in Excel becomes easier when you use the right tools. Below ar...