Wednesday, January 29, 2025

Signing vs Verification keys

When it comes to signing and verifying, the distinction lies in how public and private keys are used in asymmetric cryptography (like RSA or ECC). Here's the breakdown:

Signing

Purpose: To prove the authenticity of the data and the identity of the signer.

 Key Used: Private Key.

Explanation: When you "sign" something (like a document or message), you use your private key to create a signature. This ensures that only you, the holder of the private key, could have signed it, guaranteeing the data's integrity and the signer's identity.

Verifying

Purpose: To confirm that the data hasn't been altered and was indeed signed by the entity claiming to have signed it.

Key Used: Public Key.

Explanation: When you "verify" a signature, you use the public key of the signer to check the signature's validity. The public key allows anyone to verify that the signature matches the signed data, but it doesn't let anyone create a signature themselves.

Where to Use Each Key:

 Private Key: Used when signing. It should be kept secure and never shared, as anyone with access to the private key could sign data as though they were you.

Public Key: Used when verifying a signature. This key is shared publicly, allowing others to confirm the authenticity of the signed data without compromising security.

In summary:

  1. Sign with your private key.
  2. Verify with the public key.

Mutual Signing and Verification Between Two Parties (A & B)

1. A → B (Signed Message)

A signs the message using A’s private key.

B verifies the message using A’s public key.

2. B → A (Signed Response)

B signs the response using B’s private key.

A verifies the response using B’s public key.
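
To make the key roles concrete, here is a minimal sketch using the Python cryptography package (Ed25519 keys are used for brevity; the same roles apply with RSA or ECC). The message text and variable names are illustrative only.

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Party A generates a key pair; the public key is shared with B.
a_private_key = Ed25519PrivateKey.generate()
a_public_key = a_private_key.public_key()

message = b"Invoice #42: amount due 100 EUR"  # illustrative message

# A signs with A's PRIVATE key.
signature = a_private_key.sign(message)

# B verifies with A's PUBLIC key; verify() raises InvalidSignature if the
# message or signature was tampered with.
try:
    a_public_key.verify(signature, message)
    print("Valid: the message came from A and was not modified.")
except InvalidSignature:
    print("Invalid: the message was altered or not signed by A.")

The signed response from B back to A works the same way, just with B's key pair.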


Sunday, January 26, 2025

OIC - Extract Microsoft 365 Outlook Email Attachments and upload them to OCI Object storage

Use Case:

A client has a requirement to automate the processing of email attachments received in their Microsoft 365 Outlook inbox. Currently, they manually download the attachments from emails and upload them to Oracle Cloud Infrastructure (OCI) Object Storage for archival and further processing. This process is time-consuming and prone to errors.

The client needs a solution where the attachments are automatically extracted from specific emails (based on criteria like sender, subject, or date) and uploaded to a designated OCI Object Storage bucket. This ensures seamless and timely processing of files while reducing manual effort and improving efficiency.

Design steps:

We need to follow these steps to implement the solution:

  1. Configure the Mailbox: Set up the mailbox and create a connection in OIC using the Microsoft Office 365 Outlook adapter.

  2. Set Up Object Storage Connection: Establish a connection to OCI Object Storage in OIC.
  3. Design a Scheduled Integration: Create a scheduled integration in OIC and implement the following steps:
    1. Fetch emails from the configured mailbox.
    2. Read the email messages.
    3. Extract the attachments from the emails.
    4. Upload the extracted attachments to OCI Object Storage.
Integration flow:



Steps involved in integration flow:

Step 1: Fetch the email messages from the configured mailbox.
Select method: Get Messages

Step 2: Loop over the fetched email messages and check for messages that have attachments.


Step 3: Fetch the email attachments using the "Get an Attachment Collection" method of the MS Outlook adapter.


Map the message id template parameter to fetch the attachments of the email message.


Step 4: Loop over the fetched attachments and upload each attachment to the Object Storage bucket using the PUT operation of the REST API.

Step 5: Map the input file content and object name. Decode the content to a stream reference using the decodeBase64ToReference() function. The bucket name and namespace name can be passed from a lookup.
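
For clarity, the snippet below is a minimal Python sketch of the equivalent Object Storage upload that the PUT operation performs, using the OCI Python SDK. The namespace, bucket, object name, and base64 content are placeholder values; in the integration they come from the connection, the mapper, and the lookup.

import base64
import oci

# Placeholder values - in the integration these come from the connection
# configuration and the lookup.
NAMESPACE = "<namespace-name>"
BUCKET = "<bucket-name>"

# Authenticate with the default ~/.oci/config profile.
config = oci.config.from_file()
object_storage = oci.object_storage.ObjectStorageClient(config)

# Stand-ins for the attachment name and base64 content mapped in OIC.
attachment_name = "invoice_1001.pdf"
attachment_b64 = "SGVsbG8gT0lDIQ=="  # stand-in base64 content

# Equivalent of decodeBase64ToReference() followed by
# PUT /n/{namespaceName}/b/{bucketName}/o/{objectName}.
object_storage.put_object(
    namespace_name=NAMESPACE,
    bucket_name=BUCKET,
    object_name=attachment_name,
    put_object_body=base64.b64decode(attachment_b64),
)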



Configure the Microsoft Office 365 Outlook adapter connection in OIC:




Create a connection to the Object Storage

Either use the REST endpoint of the Object Storage Service API — https://docs.oracle.com/en-us/iaas/api/#/en/objectstorage/20160918/ — for which you can follow my blog post:

https://soalicious.blogspot.com/2022/08/oic-how-to-use-oci-object-storage-from.html

or configure your Oracle Integration instance using the steps in this link — https://docs.oracle.com/en/cloud/paas/application-integration/integrations-user/add-actions-app-driven-orchestration-integration.html#GUID-822226B0-B8EB-42E0-B053-8D844D2F45DB — to access Object Storage using the OCI Object Storage action.


For the mailbox setup, follow the Medium post below. It involves two steps:
  1. Create a Microsoft Office 365 Outlook account with a custom domain.
  2. Register the application in Azure to create the client ID and client secret (these credentials are used in the sketch below).
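
For reference, the snippet below is a rough Python sketch of what the adapter does behind the scenes with the Azure client ID and client secret: obtain an app-only token and call Microsoft Graph to list messages with attachments and fetch their attachment collections. The tenant, client, and mailbox values are placeholders, and error handling is omitted.

import requests

# Placeholder tenant, app, and mailbox values - substitute your own.
TENANT_ID = "<tenant-id>"
CLIENT_ID = "<client-id>"
CLIENT_SECRET = "<client-secret>"
MAILBOX = "integration.inbox@yourdomain.com"

# 1. Get an app-only token using the client ID/secret registered in Azure.
token = requests.post(
    f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/v2.0/token",
    data={
        "grant_type": "client_credentials",
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
        "scope": "https://graph.microsoft.com/.default",
    },
).json()["access_token"]
headers = {"Authorization": f"Bearer {token}"}

# 2. List messages in the mailbox that carry attachments.
messages = requests.get(
    f"https://graph.microsoft.com/v1.0/users/{MAILBOX}/messages",
    params={"$filter": "hasAttachments eq true"},
    headers=headers,
).json()["value"]

# 3. Fetch the attachment collection for each message (content is base64).
for msg in messages:
    attachments = requests.get(
        f"https://graph.microsoft.com/v1.0/users/{MAILBOX}/messages/{msg['id']}/attachments",
        headers=headers,
    ).json()["value"]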

Thursday, January 16, 2025

OIC - how can I use XSLT functions to remove leading zeros from numeric and alphanumeric fields?

To remove leading zeros from a numeric field in Oracle Integration Cloud (OIC) using XSLT, you can use the number() function.

The number() function automatically converts a string with leading zeros into a numeric value, effectively removing the leading zeros.

<xsl:value-of select="number(input_element)" />

Example:

Input: "000123"

Output: 123

To remove leading zeros from an alphanumeric string in Oracle Integration Cloud (OIC) using XSLT, you can use the replace() function in combination with regular expressions. Here's how you can achieve it:

Explanation:

1. Input Element: Replace your_input_element with the XPath to the input value.

2. Regular Expression:

^0+ matches one or more zeros (0) at the start (^) of the string.

3. Replace Function: The replace() function removes the matched zeros by replacing them with an empty string ('').

Input Example:

If the input is 00123ABC, the result will be 123ABC.

XSLT code:

<xsl:template match="/">
    <result>
        <xsl:value-of select="replace(your_input_element, '^0+', '')"/>
    </result>
</xsl:template>



Wednesday, January 15, 2025

OIC - Splitting Fixed-Length File Based on batch header Terminal Numbers into 2 separate files using xslt mapping.

Use Case: 


In integration workflows, processing fixed-length files is a common requirement. A typical fixed-length file might contain hierarchical data structured as:

  1. File Header: Represents metadata for the file.
  2. Batch Header: Denotes the start of a batch, including terminal-specific identifiers (e.g., 001 or 002).
  3. Detail Records: Contains individual transaction or data entries for each batch.
  4. Batch Trailer: Marks the end of a batch.
  5. File Trailer: Marks the end of the entire file.

Problem Statement:

Given a fixed-length file structured as above, the requirement is:

Identify Batch Headers containing specific terminal numbers (e.g., 001, 002).

Split the file into separate outputs based on these terminal numbers.

Transform each split batch into a target file format for further processing.

Example Input File:

File Header  

Batch Header (001 Terminal)  

Detail  

Detail  

Batch Trailer  

Batch Header (002 Terminal)  

Detail  

Detail  

Batch Trailer  

File Trailer

Expected Output:

File 1: Contains data related to 001 terminal.

Batch Header (001 Terminal)  

Detail  

Detail  

Batch Trailer  

File 2: Contains data related to 002 terminal.

Batch Header (002 Terminal)  

Detail  

Detail  

Batch Trailer  

Solution Overview:

1. File Parsing: Read the fixed-length file as a CSV-style sample file (each line becomes one record).

2. Get Batch Header Positions: Identify the positions of the Batch Headers with terminal numbers 001 and 002.

3. Splitting Logic: Extract the data between the Batch Header and Batch Trailer for terminal numbers 001 and 002 respectively, using the positions fetched in step 2 (a simple sketch of this splitting idea follows this overview).

4. Read the Split Fixed-Length Files: Read the split files using nXSD.

5. Transformation: Convert the split content into the desired target file format (e.g., XML or JSON).

6. Output Generation: Write the transformed content into separate output files.

This solution ensures modular processing of hierarchical data, enabling seamless integration into downstream systems.
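
As a conceptual illustration of step 3, here is a minimal Python sketch of the position-based split, assuming each record is one line, batch headers contain "RH", and the terminal number occupies positions 23-25 (matching substring(Data, 23, 3) in the XSLT further below). The file name is a placeholder and edge cases (e.g., a missing terminal) are ignored.

# Assumptions: one record per line, "RH" marks a batch header, terminal
# number sits in columns 23-25; placeholders used for the file name.
def header_position(lines, terminal):
    """Return the 0-based index of the batch header for the given terminal."""
    for i, line in enumerate(lines):
        if "RH" in line and line[22:25] == terminal:
            return i
    return None

with open("source_fixed_length.txt") as f:   # placeholder file name
    lines = f.read().splitlines()

pos_001 = header_position(lines, "001")
pos_002 = header_position(lines, "002")

# Slice each terminal's records (dropping the file header and file trailer).
if pos_001 < pos_002:
    file_001 = lines[pos_001:pos_002]
    file_002 = lines[pos_002:-1]
else:
    file_002 = lines[pos_002:pos_001]
    file_001 = lines[pos_001:-1]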


XSLT code used for getting the batch header positions for 001 and 002:

<xsl:template match="/" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xml:id="id_11">
    <nstrgmpr:Write xml:id="id_12">
        <ns28:RecordSet>
            <!-- Position of the batch header record ("RH") whose terminal number (columns 23-25) is 001 -->
            <xsl:variable name="CircleKPosition">
                <xsl:for-each select="$ReadSourceFile/nsmpr2:ReadResponse/ns26:RecordSet/ns26:Record" xml:id="id_48">
                    <xsl:choose>
                        <xsl:when test="contains(ns26:Data, &quot;RH&quot;) and (substring(ns26:Data, 23, 3) = &quot;001&quot;)">
                            <xsl:value-of select="position()" />
                        </xsl:when>
                    </xsl:choose>
                </xsl:for-each>
            </xsl:variable>
            <!-- Position of the batch header record whose terminal number is 002 -->
            <xsl:variable name="VangoPosition">
                <xsl:for-each select="$ReadSourceFile/nsmpr2:ReadResponse/ns26:RecordSet/ns26:Record" xml:id="id_48">
                    <xsl:choose>
                        <xsl:when test="contains(ns26:Data, &quot;RH&quot;) and (substring(ns26:Data, 23, 3) = &quot;002&quot;)">
                            <xsl:value-of select="position()" />
                        </xsl:when>
                    </xsl:choose>
                </xsl:for-each>
            </xsl:variable>
            <!-- Write both positions so the splitting step can compare them -->
            <ns28:Record>
                <ns28:CircleK>
                    <xsl:value-of select="$CircleKPosition" />
                </ns28:CircleK>
                <ns28:Vango>
                    <xsl:value-of select="$VangoPosition" />
                </ns28:Vango>
            </ns28:Record>
        </ns28:RecordSet>
    </nstrgmpr:Write>
</xsl:template>


XSLT code for splitting out the 001 file (the same approach applies for 002):

<xsl:template match="/" xml:id="id_175">
    <nstrgmpr:Write xml:id="id_17">
        <ns31:RecordSet xml:id="id_56">
            <xsl:choose xml:id="id_59">
                <!-- 001 batch comes before the 002 batch: keep every record before the 002 header -->
                <xsl:when test="number($WriteBatchHeaderPositions_REQUEST/nsmpr3:Write/ns32:RecordSet/ns32:Record/ns32:CircleK) &lt; number($WriteBatchHeaderPositions_REQUEST/nsmpr3:Write/ns32:RecordSet/ns32:Record/ns32:Vango)" xml:id="id_60">
                    <xsl:for-each select="$ReadSourceFile/nsmpr2:ReadResponse/ns28:RecordSet/ns28:Record[position() &lt; number($WriteBatchHeaderPositions_REQUEST/nsmpr3:Write/ns32:RecordSet/ns32:Record/ns32:Vango)]" xml:id="id_61">
                        <ns31:Data>
                            <xsl:value-of select="ns28:Data" />
                        </ns31:Data>
                    </xsl:for-each>
                </xsl:when>
                <!-- 001 batch comes after the 002 batch: keep every record from the 001 header onwards -->
                <xsl:when test="number($WriteBatchHeaderPositions_REQUEST/nsmpr3:Write/ns32:RecordSet/ns32:Record/ns32:CircleK) &gt; number($WriteBatchHeaderPositions_REQUEST/nsmpr3:Write/ns32:RecordSet/ns32:Record/ns32:Vango)" xml:id="id_62">
                    <xsl:for-each select="$ReadSourceFile/nsmpr2:ReadResponse/ns28:RecordSet/ns28:Record[position() &gt;= number($WriteBatchHeaderPositions_REQUEST/nsmpr3:Write/ns32:RecordSet/ns32:Record/ns32:CircleK)]" xml:id="id_63">
                        <ns31:Data>
                            <xsl:value-of select="ns28:Data" />
                        </ns31:Data>
                    </xsl:for-each>
                </xsl:when>
            </xsl:choose>
        </ns31:RecordSet>
    </nstrgmpr:Write>
</xsl:template>

Screenshots:

For getting batch header positions


For splitting the content.




Wednesday, January 8, 2025

OIC - difference between SOAP and REST

Difference between SOAP and REST:

SOAP (Simple Object Access Protocol)

Protocol: A strict protocol for message exchange with built-in standards for security, transactions, and error handling.

Message Format: Always uses XML, requiring a structured SOAP envelope with headers and body.

Transport: Can use multiple transport protocols like HTTP, SMTP, TCP, etc.

Features:

Supports WS-Security for secure message exchange.

Better suited for stateful operations (e.g., sessions).

Heavier and slower due to XML parsing and strict standards.

Use Cases: Enterprise applications requiring high security, reliability, and formal communication (e.g., financial transactions, payment gateways).


REST (Representational State Transfer)

Architecture: A lightweight architectural style, not a protocol, that uses standard HTTP methods (GET, POST, PUT, DELETE).

Data Formats: Supports multiple formats like JSON, XML, HTML, or plain text (JSON being the most common).

Transport: Typically relies on HTTP/HTTPS.

Features:

Stateless, meaning each request is independent of others.

Faster and more efficient due to lightweight communication and flexible data formats.

Easy to implement and widely used for modern APIs.

Use Cases: Web and mobile applications, microservices, and scenarios where performance and scalability are priorities.

In summary:

SOAP is rigid, heavy, and ideal for complex, secure operations.

REST is simple, flexible, and optimized for web-based services.
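
To illustrate the difference in message style, here is a small hedged example: the same hypothetical "get customer" operation invoked once as a SOAP call (an XML envelope posted over HTTP) and once as a REST call (a resource URL fetched with GET). The endpoints and payloads are made up for illustration.

import requests

# Hypothetical endpoints, used purely for illustration.
SOAP_URL = "https://example.com/services/CustomerService"
REST_URL = "https://example.com/api/customers/123"

# SOAP: the operation is wrapped in an XML envelope and sent as an HTTP POST.
soap_envelope = """<?xml version="1.0"?>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                  xmlns:cus="http://example.com/customer">
  <soapenv:Body>
    <cus:GetCustomer>
      <cus:CustomerId>123</cus:CustomerId>
    </cus:GetCustomer>
  </soapenv:Body>
</soapenv:Envelope>"""
soap_response = requests.post(
    SOAP_URL,
    data=soap_envelope,
    headers={"Content-Type": "text/xml; charset=utf-8", "SOAPAction": "GetCustomer"},
)

# REST: the resource is addressed by its URL and fetched with a plain GET,
# typically returning JSON.
rest_response = requests.get(REST_URL, headers={"Accept": "application/json"})

print(soap_response.status_code, rest_response.status_code)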



OIC - Can we send XML input in rest adapter?

Yes, the REST adapter in OIC can handle XML input.

  1. Set the media type to application/xml.
  2. Define an XML schema (XSD) for payload mapping.
  3. Include Content-Type: application/xml in the request header.
  4. Test with XML payloads using Postman or curl.

Ensure proper configuration and schema matching for seamless processing.
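
As a quick illustration of step 4, here is a minimal Python sketch of posting an XML payload to a REST-trigger integration; curl or Postman would send an equivalent request. The endpoint URL, payload, and credentials are placeholders.

import requests

# Placeholder OIC endpoint and credentials - replace with your own values.
url = "https://<oic-host>/ic/api/integration/v1/flows/rest/SAMPLE_XML_FLOW/1.0/orders"
xml_payload = """<?xml version="1.0" encoding="UTF-8"?>
<Order>
  <OrderId>1001</OrderId>
  <Status>NEW</Status>
</Order>"""

response = requests.post(
    url,
    data=xml_payload.encode("utf-8"),
    headers={"Content-Type": "application/xml"},
    auth=("oic_user", "oic_password"),  # or an OAuth token, depending on setup
)
print(response.status_code, response.text)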

Screenshots:

TBD


OIC - Use of nested DVMs or lookups to implement conditional logic | Enhancing Error Handling in OIC with Conditional DVM Lookups

Here, we will demonstrate how nested DVMs or lookups can be utilized to implement conditional logic effectively.

Use Case:

Error handling is a critical component in integrations. In Oracle Integration Cloud (OIC), different error codes might require specific severity levels for better classification. For example:

Specific error codes from external systems (e.g., ERROR_CODE) map to severity levels like CRITICAL, HIGH, or LOW.

If an unknown or generic error occurs, it defaults to a predefined severity level (e.g., NA).

To achieve this, we use nested DVM lookups to dynamically retrieve severity levels based on the error context.

Code Explanation

The provided code handles dynamic severity mapping based on two levels of logic:

1. Primary Lookup: Retrieves the SEVERITY based on the error code received (ERROR_CODE) from the payload.

2. Fallback Logic: If the error code is missing or not found, a second lookup retrieves the default severity mapped to the GENERICOICTECHNICALERROR key, ensuring robustness.

Expression Used:

dvm:lookupValue("Common_Error_Details_Lookup","ERROR_CODE",$GlobalFaultObject/ns0:fault/ns0:errorCode,"SEVERITY",dvm:lookupValue("Common_Error_Details_Lookup","KEY","GENERICOICTECHNICALERROR","SEVERITY","NA"))


How It Works:

1. Outer Lookup:

File: Common_Error_Details_Lookup.

Uses ERROR_CODE from the payload ($GlobalFaultObject/ns0:fault/ns0:errorCode).

Retrieves the corresponding SEVERITY.

2. Inner Lookup (Fallback):

File: Common_Error_Details_Lookup.

Key: GENERICOICTECHNICALERROR.

Retrieves a default severity (SEVERITY) if the primary lookup fails or returns null.
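
The following is a small Python sketch of the same conditional logic, using an in-memory dictionary as a stand-in for the Common_Error_Details_Lookup DVM. The example rows are invented for illustration; only the fallback pattern mirrors the nested lookup.

# Hypothetical in-memory stand-in for the Common_Error_Details_Lookup DVM;
# the rows below are invented for illustration.
error_severity_dvm = {
    "ORA-01017": "CRITICAL",
    "API_TIMEOUT": "HIGH",
    "GENERICOICTECHNICALERROR": "NA",
}

def resolve_severity(error_code):
    """Mimic the nested dvm:lookupValue call: look up the specific error
    code first, then fall back to the GENERICOICTECHNICALERROR default."""
    default_severity = error_severity_dvm.get("GENERICOICTECHNICALERROR", "NA")
    return error_severity_dvm.get(error_code, default_severity)

print(resolve_severity("API_TIMEOUT"))  # HIGH
print(resolve_severity("UNKNOWN"))      # NA (fallback)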

Key Benefits

1. Dynamic Mapping: Avoids hardcoding severity levels by using reusable DVM files.

2. Robust Error Handling: Implements a fallback mechanism for unknown or generic errors.

3. Centralized Control: Simplifies maintenance by managing mappings in one place (DVM).

4. Seamless Integration: Ensures consistent error handling across systems and reduces potential failures.

Practical Example

DVM File Structure


Conclusion

This approach highlights how OIC's DVM lookup capability can be extended to include conditional logic, ensuring robust and scalable error handling mechanisms in integration flows. By leveraging nested DVMs, developers can streamline error classification while minimizing manual intervention.

