Friday, September 13, 2024

OIC Gen 3 - Data Stitch action

Data stitch operation types:

  1. Assign: use this when, for example, an invoke runs inside a scope and you need its response data outside that scope; assign the invoke variable to a global variable.
  2. Append: use this when you need to incrementally build a message payload or apply a partial update to an existing payload.
  3. Remove: use this to remove some data from an existing payload.


Assign:

OIC - Global variable and Data Stitch action using assign operation

Append:

OIC - Append operation in Data stitch action | Incrementally build a message payload using Data stitch

Remove:

TBD


Thursday, September 12, 2024

OIC Gen3 - about actions

How to add actions:

  1. Open the actions pane
  2. Use the inline Menu
List of actions available:



OIC Gen 3 - Working with XSLT constructors

If we open the toggle-functions palette, by default we will not see the XSL constructors; we have to click the XSLT button to make them visible.



If we drag an XSL statement and hover or drop it in front of an element, it is added as a child component of that element.



If we drag an XSL statement and hover or drop it behind the element name, it is added as a parent component of that element.



Note:

  • Select the if statement to specify a single condition.
  • Use choose/when/otherwise to specify multiple conditions.
  • For looping logic, use the for-each statement.
  • We can also use XSLT output >> literal, text, attribute, copy-of and value-of to set default values in the mapper.
  • Using the copy-of constructor, we can perform a deep copy of elements from source to target when both sides have the same set of elements.
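The constructors above can be sketched in a single fragment. This is illustrative only; the element names (SourceRoot, Employee, etc.) are assumptions, not from an actual OIC mapping:

```xml
<!-- Illustrative XSLT fragment: if, choose/when/otherwise, for-each and value-of -->
<xsl:template match="/">
  <Employees>
    <!-- for-each: loop over each source Employee -->
    <xsl:for-each select="/SourceRoot/Employee">
      <Employee>
        <!-- if: a single condition -->
        <xsl:if test="Salary &gt; 50000">
          <HighEarner>true</HighEarner>
        </xsl:if>
        <!-- choose/when/otherwise: multiple conditions -->
        <xsl:choose>
          <xsl:when test="Dept = 'HR'">
            <Group>Corporate</Group>
          </xsl:when>
          <xsl:otherwise>
            <Group>Operations</Group>
          </xsl:otherwise>
        </xsl:choose>
        <!-- value-of: copy a single value to the target -->
        <Name><xsl:value-of select="Name"/></Name>
      </Employee>
    </xsl:for-each>
  </Employees>
</xsl:template>
```

In the mapper UI, each of these is dropped from the XSLT constructors palette rather than typed by hand; the generated code looks like the above.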


XSLT code editor use cases:
  • Create internal variables
  • Correlate multiple data sources grouped by a key field using the xsl:for-each-group constructor
  • Create target name-value pairs
  • Implement push-style XSLT (call-template, apply-templates)
  • Write custom XSLT functions
  • Copy node sets
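As a minimal sketch of the grouping use case (XSLT 2.0; current-grouping-key() and current-group() are standard functions, while the Orders/DeptId element names are assumptions for illustration):

```xml
<!-- Group order lines by DeptId and emit one summary node per group -->
<xsl:for-each-group select="/Orders/Order" group-by="DeptId">
  <DeptSummary>
    <DeptId><xsl:value-of select="current-grouping-key()"/></DeptId>
    <!-- current-group() returns every Order in the current group -->
    <OrderCount><xsl:value-of select="count(current-group())"/></OrderCount>
    <Total><xsl:value-of select="sum(current-group()/Amount)"/></Total>
  </DeptSummary>
</xsl:for-each-group>
```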

Edit XSLT code in Oracle JDeveloper:
  • Export the code:
    • Design the integration flow logic
    • Open the empty map action
    • Map one data value from each required structure to the target
    • Validate and close the mapper
    • Save and close the integration
    • Export the entire integration archive (.iar file); for projects, export the .car file
  • Import the .iar file into Oracle JDeveloper:
    • Create an OSB application and project
    • Import the integration archive into the OSB project (Service Bus Resources >> select Zipped or Archived Files >> select the file >> import)
  • Locate and open the .xsl file to edit it
  • Import the .xsl file (not the entire integration) back into the OIC mapper

OIC Gen 3 - Error handling | OIC Error Hospital | Global and scope level fault handlers | Fault and End actions

Why we need to implement error handling / designing beyond the happy path:

  • What happens when we don't implement fault handling logic of any kind? By default, every error that occurs at runtime is delivered to the OIC error hospital, which is part of the OIC runtime environment hosting all your deployed integrations. This can include any fault: runtime or business faults, request timeouts, invalid payloads, internal errors, etc.
  • When the error hospital catches a fault, the integration flow stops or is terminated immediately.
  • If you work with projects, the fault shows on the Observe page.
  • If the integration is synchronous app-driven and a fault happens, the error hospital sends the same fault details to the client.
  • All errors are visible in the Observability section >> Errors page.

Instead of allowing the error hospital to catch every fault, we can intentionally catch all faults using the global and scope fault handlers.

Best practice Examples of error handling:

  1. Log the error but continue with the integration flow.
  2. Invoke a secondary service for backup processing.
  3. Log the error and then terminate the integration flow
  4. Invoke another service for notification or error handling processing.
  5. Reply to the integration flow's client with a custom error response.
  6. Send an email notification to an external stakeholder or an internal administrator.
  7. Invoke an OIC  process to initiate a process workflow involving manual intervention.

Implementing a global fault handler:

  1. By default, there is a default global fault handler containing a re-throw fault action, which rethrows the fault to the OIC error hospital. Faults reach the global fault handler in these cases:
    1. All uncaught faults and errors go to the global fault handler
    2. A throw new fault action runs in a scope fault handler
    3. A re-throw fault action runs in a scope fault handler
  2. Change the default handling logic to suit your integration, for example:
    1. Add a logger
    2. Send an email notification
    3. Map custom data for a fault return for sync integrations
    4. Mitigate the error condition with alternative logic
    5. Invoke an error handling service (another OIC integration or an external service)

Global fault handler end actions:

  1. Re-throw fault: sends the fault to the error hospital.
  2. Throw new fault: sends custom fault details to the error hospital.
  3. Stop: terminates the integration flow; the error hospital is not involved. Separate fault handling logic can run before the stop action.
  4. Fault return: explicitly sends business faults to the client of a sync service. The error hospital is not involved.
  5. Return: fault mitigation logic, e.g. invoking a backup service and then sending the normal data to the client. The error hospital is not involved.

For each invoke, try to create a scope and define a scope fault handler.

Faults occurring within handlers are caught by the next higher handler:

Inner scope >> outer scope >> global >> error hospital

Scope or global fault objects are created to fetch the fault details in the fault handler.


Note: 

For my project, I have followed the approach below:

  1. For each invoke >> wrap it in a scope, and in the default scope fault handler >> throw a new fault with code, reason and details.
  2. In the global fault handler >> send the global fault details to the Datadog SaaS application for further support.
  3. We created a lookup like common_error_details_lookup that captures the key, error type, error code, reason and details. Using a key like 'UCMUPLOADFAILED', we can fetch custom error details.



Friday, August 30, 2024

OIC FTP - FTP adapter write to file - File name pattern not taking the pattern specified in the mapper

Case study: 

I want to create a dynamic file name to be written to FTP folder but the dynamic file name provided in the mapper is not used by OIC. 

The file name has been mapped in the ftp mapper but the file that is written to FTP uses the file name pattern specified in FTP adapter write file action.

Example

File name pattern under the write file action: demo.txt

File name in the mapper under properties: newfilename

During the run, the output filename came as: demo.txt

But the expected name was: newfilename


Screenshots of what we did:

FTP write adapter configure page:


File name mapped in the Mapper 



Solution steps:

We should map the file name and directory under OutboundFTPHeaderType instead of the filename under properties.
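A hedged sketch of what that mapping target looks like in the mapper-generated XSLT (the namespace prefix, directory path and concat expression are assumptions for illustration; verify the exact element names against your own FTP adapter schema):

```xml
<!-- Map the dynamic name under the FTP outbound header, not under properties/filename -->
<ns:OutboundFTPHeaderType>
  <ns:fileName>
    <!-- e.g. build a timestamped file name -->
    <xsl:value-of select="concat('newfilename_', fn:current-dateTime(), '.txt')"/>
  </ns:fileName>
  <ns:directory>
    <xsl:value-of select="'/outbound/demo'"/>
  </ns:directory>
</ns:OutboundFTPHeaderType>
```

With this mapping in place, the pattern configured on the write file action (demo.txt) is overridden at runtime by the mapped value.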



Monday, August 26, 2024

OIC HCM - How to schedule and download HCM extract and send to downstream applications using Oracle HCM Cloud Adapter

Use case: Here, we will see how to schedule and download an HCM extract and send it to downstream applications using the Oracle HCM Cloud adapter.

High level:

We will create 3 integrations and achieve our goal as follows:

  1. One scheduler integration: This will only be used to schedule and call the 2nd, common extract integration and pass the Effective Date name-value pairs.
  2. Common HCM extract integration: This one-way, app-driven integration will do the following:
    1. FlowActionService >> submitFlow: extracts the data to UCM
    2. getFlowTaskInstanceStatus: shows the extract status
    3. If the extract SUCCEEDED, get the IntegrationContentId
    4. Call the main integration and pass the content id
  3. Main integration:
    1. GET_FILE from UCM, using dDocName as the content id, via the GenericSoapPort WSDL service
    2. Decode the file
    3. Transform the file to the downstream-specific format
    4. Call the downstream app and send the file
Integration names used:
  1. INT<Interfaceno>_<Source>To<Target>_interfacedetails_Sch_V1
  2. Common_HCM_Extract_Trigger_V1
  3. INT<Interfaceno>_<Source>To<Target>_interfacedetails_Main_v1

Detailed steps:

1st Schedule integration:

Step1: Schedule param:
  • LastRunDate : ""
  • Schedule_EffectiveDate : "YYYY-MM-DD"

Step2: Assign required variables:
  • varInterfaceName: INT332
  • var_ExtractDelayinSeconds: 60
  • var_ParameterName: 'Effective Date'
  • var_ParameterValue: ns94:format-dateTime(fn:current-dateTime(),'[Y0001]-[M01]-[D01]')

Step3: Switch and check
If $schedule_effectivedate = ns74:format-dateTime($schedule_effectivedate,'[Y0001]-[M01]-[D01]') or
$schedule_effectivedate = 'YYYY-MM-DD'

Step4: INVOKE OIC common extract integration.
and map below details:
  • Interfacename
  • Processname
  • Correlationid as instanceid
  • Extractdelayinseconds
  • Lastrundate
  • Extract params(effective date name and value)
Otherwise: Throw new fault:
Invalid date format input for effective date.

Step5: From the default fault handler, re-throw the fault; from the global fault handler >> send the fault details to the Datadog app or a notification, as per requirement.


Common HCM extract integration:

Step1: Rest Post trigger 
Payload:
{
  "interfaceName": "",
  "processName": "",
  "correlationId": "",
  "extractDelayInSeconds": "",
  "sequenceNumber": "",
  "lastRunDate": "",
  "extractParameters": {
    "parameter": [ { "name": "", "value": "" } ]
  },
  "extractIntegrationName": [ { "integrationName": "" } ]
}

Step2: SubmitFlow:

Select the created Oracle HCM Cloud adapter >> select "Query, Create, Update, or Delete Information" >> select Service: FlowActionsService >> Operation: submitFlow

Map the following details:
  1. FlowName
  2. The parameters (Effective Date name and value)
  3. FlowInstanceName: fn:concat($var_IntegrationName, fn:current-dateTime())
  4. LegislativeDataGroup
  5. RecurringFlag: false






Step3: Take a switch and check if the submitFlow result = true

Step4: Take a while loop and check getFlowTaskInstanceStatus until the extract completes.
var_ExtractCompleted = 'false'

Select the created Oracle HCM Cloud adapter >> select "Query, Create, Update, or Delete Information" >> select Service: FlowActionsService >> Operation: getFlowTaskInstanceStatus

And map the following:
  1. flowInstanceName
  2. flowTaskInstanceName: flow name
  3. legislativeDataGroupName







Step5: Take a switch and check getFlowTaskInstanceStatus >> result = 'COMPLETED'. Then

Assign as below:
var_ExtractCompleted : true

Otherwise: nothing.


Step6: For each extract (repeating element: extractIntegrationName), check the content id.

Select the created Oracle HCM Cloud adapter >> select "Query, Create, Update, or Delete Information" >> select Service: FlowActionsService >> Operation: getIntegrationContentId

Pass the following:

  1. flowInstanceName
  2. flowTaskInstanceName: flow name
  3. legislativeDataGroupName
  4. integrationName






Step7: Take a stage file action and write the content id and process id.

Step8: Take an assign and store the content id response status.

Step9: Take a switch and check if the content id status = 'SUCCEEDED'.
THEN
Take a logger and log always:
concat('Status of this content id: ', ContentId, ' is ', Status)

Otherwise, throw a new fault.

Step10: Read the content id and process id using a stage file action.

Step 11: Call the next (main) integration and pass the below fields:

  1. interfaceName
  2. processName
  3. correlationId
  4. sequenceNumber
  5. UCM content details:
    1. content id
    2. process id
  6. lastRunDate
  7. Connectivity properties:
    1. localIntegration
      1. Code: fetched from lookup
      2. Version: fetched from lookup

Main Integration:


Step1: rest trigger

Payload:
{
  "interfaceName": "",
  "processName": "",
  "correlationId": "",
  "sequenceNumber": "",
  "UCMContentDetails": [ { "ContentId": "", "ProcessId": "" } ],
  "flowInstanceName": "",
  "lastRunDate": ""
}


Step2: Download file from ucm:
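A hedged sketch of the GET_FILE request sent through the GenericSoapPort service. The GenericRequest/Service/Document structure comes from the UCM generic SOAP interface, but the namespace prefix, webKey and dDocName value here are placeholders for illustration; confirm against your WSDL:

```xml
<!-- Fetch the extract file from UCM by content id (dDocName) -->
<ucm:GenericRequest webKey="cs" xmlns:ucm="http://www.oracle.com/UCM">
  <ucm:Service IdcService="GET_FILE">
    <ucm:Document>
      <!-- dDocName is mapped from the IntegrationContentId returned in the previous integration -->
      <ucm:Field name="dDocName">UCMFA0012345</ucm:Field>
      <ucm:Field name="RevisionSelectionMethod">Latest</ucm:Field>
    </ucm:Document>
  </ucm:Service>
</ucm:GenericRequest>
```

The response returns the file content (base64-encoded), which is why the next step decodes it before transformation.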








Step3: Read the file in segments, write the file using a stage file action, and send it to the downstream apps.





Sunday, August 11, 2024

SQL working with dates

 How to use Dates, Timestamps and intervals in Oracle databases.

Data types available:

  • Date
  • Timestamp
    • Timestamp
    • Timestamp with time zone
    • Timestamp with local time zone
  • Interval
    • Year to month
    • Day to second

Date:

The DATE datatype is used by Oracle to store all datetime information where a precision greater than 1 second is not needed. Oracle uses a 7 byte binary date format which allows Julian dates to be stored within the range of 01-Jan-4712 BC to 31-Dec-9999 AD.

Example (10-JUL-2004 17:21:30)


Date functions:

SYSDATE:

Returns the current date-time from the operating system of the database server.

SELECT SYSDATE 
FROM   dual;

SYSDATE
-------------------
10/07/2004 18:34:12

1 row selected.

CURRENT_DATE

ADD_MONTHS(date, months)

LAST_DAY(date)

MONTHS_BETWEEN(date, date)

NEXT_DAY(date, day)

NEW_TIME(date, timezone1, timezone2)
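The functions listed above follow the same pattern as the examples below them; as a quick sketch of the arithmetic ones (using a fixed literal date so the results are deterministic; 10-Jul-2004 is a Saturday):

```sql
-- ADD_MONTHS, LAST_DAY, MONTHS_BETWEEN and NEXT_DAY in one query
SELECT ADD_MONTHS(DATE '2004-07-10', 3)                      AS add_months_res,     -- 10-OCT-2004
       LAST_DAY(DATE '2004-07-10')                           AS last_day_res,       -- 31-JUL-2004
       MONTHS_BETWEEN(DATE '2004-09-10', DATE '2004-07-10')  AS months_between_res, -- 2
       NEXT_DAY(DATE '2004-07-10', 'MONDAY')                 AS next_day_res        -- 12-JUL-2004
FROM   dual;
```

Note that MONTHS_BETWEEN returns a fractional number when the day components differ, and NEXT_DAY's day name is interpreted in the session's NLS language.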

TO_CHAR(date, format):

Converts a specified date to a string using the specified format mask. If the format mask is omitted the NLS_DATE_FORMAT value is used.

SELECT TO_CHAR(SYSDATE, 'DD/MM/YY HH24:MI') 
FROM   dual;

TO_CHAR(SYSDAT
--------------
10/07/04 18:48

1 row selected.

For different parameters or format mask:

https://www.techonthenet.com/oracle/functions/to_char.php

TO_DATE(date_string, format):

Converts a specified string to a date using the specified format mask. If the format mask is omitted the NLS_DATE_FORMAT value is used.

SELECT TO_DATE('10/07/2004 13:31:45', 'DD/MM/YYYY HH24:MI:SS') 
FROM   dual;

TO_DATE('10/07/2004
-------------------
10/07/2004 13:31:45

1 row selected.

ROUND(date, format):

Returns a date rounded to the level specified by the format. The default value for the format is DD, returning the date without the fractional (time) component, making it represent midnight on the specified date, or the following date depending on the rounding.

SELECT SYSDATE, 
       ROUND(SYSDATE, 'HH24') 
FROM   dual;

SYSDATE             ROUND(SYSDATE,'HH24
------------------- -------------------
10/07/2004 18:54:24 10/07/2004 19:00:00

1 row selected.

TRUNC(date, format):

Returns a date truncated to the level specified by the format. The default value for the format is DD, truncating the fractional (time) component, making it represent midnight on the specified date. Using the TRUNC function allows comparison of dates without the time components distracting from the true meaning of the comparison. It is similar to the round function, except that it always rounds down.

SELECT SYSDATE, 
       TRUNC(SYSDATE, 'HH24') 
FROM   dual;

SYSDATE             TRUNC(SYSDATE,'HH24
------------------- -------------------
10/07/2004 18:55:44 10/07/2004 18:00:00

1 row selected.

TIMESTAMP

The TIMESTAMP datatype is an extension on the DATE datatype. In addition to the datetime elements of the DATE datatype, the TIMESTAMP datatype holds fractions of a second to a precision between 0 and 9 decimal places, the default being 6. There are also two variants called TIMESTAMP WITH TIME ZONE and TIMESTAMP WITH LOCAL TIME ZONE. As their names imply, these timestamps also store time zone offset information.

Example (10-JUL-2004 17:21:30.662509 +01:00)


Select ...

From ...

Where datetime_col > to_date('16-Feb-2023', 'DD-Mon-YYYY')



Reference:

https://oracle-base.com/articles/misc/oracle-dates-timestamps-and-intervals

https://youtu.be/P0qVs5ukKUk?si=Kt6WUk3sz1-tIz00
