Thursday, July 15, 2021

Fine grained APIs vs Coarse grained APIs

In a coarse grained API, your data is typically housed in a few large components, while a fine grained API spreads it across a large number of smaller components.

If your components are equal in size but vary in complexity and features, you end up with coarse granularity. To build a fine grained API, divide your components based on the cohesiveness and coordination of their functionality.

The following considerations will help you pick the right granularity level:

1. Reusability: fine grained wins.

Since the information is spread across a large number of APIs, fine grained APIs typically offer greater reusability than their coarse grained counterparts.

2. Scalability: fine grained wins.

Fine grained APIs are designed to scale easily and improve the performance of your APIs.

3. Security & analytics: fine grained wins.

A fine grained approach enables you to apply security at a more granular level. It also lets you collect detailed analytics to help resolve production issues.

4. Management overhead: coarse grained wins.

Fine grained means you end up managing more APIs, which typically increases overhead.

5. Ability to deploy: fine grained wins.

Since a change to a coarse grained API is more disruptive, it is more difficult to move changes and roll out new functionality to production.

6. Agility & innovation: fine grained wins.

7. Resource usage: coarse grained wins.

Fine grained APIs consume infrastructure resources at a faster pace than coarse grained APIs.

8. Complexity: fine grained wins.

With a fine grained approach, complexity at the node or leaf level stays low.

9. Performance: tie.

With a fine grained API, you only expose the information the client needs. This saves bandwidth and offers reliable performance, but risks slowing down the client application, since it must make multiple calls to gather all the information it needs. With a coarse grained approach, clients make fewer calls for the information they need, but at the risk of exposing unwanted data.

10. Latency: fine grained wins.
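To make the trade-off concrete, here is a minimal Python sketch contrasting the two styles. The endpoint functions and the customer data are hypothetical, invented purely for illustration:

```python
# Hypothetical in-memory "backend" shared by both API styles.
CUSTOMERS = {
    1: {
        "profile": {"name": "Alice", "email": "alice@example.com"},
        "orders": [{"id": 101, "total": 250.0}],
        "preferences": {"newsletter": True},
    }
}

# Coarse grained: one endpoint returns everything about a customer.
def get_customer(customer_id):
    return CUSTOMERS[customer_id]

# Fine grained: each piece of data has its own endpoint, so clients
# fetch only what they need, but may need several round trips.
def get_customer_profile(customer_id):
    return CUSTOMERS[customer_id]["profile"]

def get_customer_orders(customer_id):
    return CUSTOMERS[customer_id]["orders"]
```

A client that only needs the email saves bandwidth with the fine grained call, at the cost of one round trip per resource; the coarse grained call returns everything at once but may expose data the client never asked for.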

Tuesday, July 13, 2021

OIC - Schedule parameters to get last run Date and Time

Introduction of schedule parameters:
  • Schedule parameters are available across all scheduled runs of an integration and can be used to carry data from one run to the next. For example, when performing batch processing, a schedule parameter can track the current position in the batched data between runs.
  • A maximum of 5 parameters can be added.
  • A common use is storing the last run date and time of the scheduled integration to avoid duplicate processing of data.

Use case: We will create a schedule parameter lastRunDateTime initialized to '', log the last run date and time, and then update lastRunDateTime with the schedule's start time.

Detailed Steps:

Step 1: Create a schedule orchestration integration, click the Schedule icon, and select Edit.

Step 2: Click the plus icon, enter the parameter name lastRunDateTime with a default value of '', and close.

Step 3: Drag and drop a Logger activity, select the Always radio button, click the Edit Expression icon, and enter the expression below:
concat("last run date and time: ", $lastRunDateTime)

Step 4: From Actions, add an Assign activity, select the created schedule parameter lastRunDateTime, and assign it the schedule's startTime. Validate and close.

Step 5: Add tracking, then save, activate, and test.

Run the integration the first time: lastRunDateTime is empty.

Run it a second time: lastRunDateTime shows the start time of the previous scheduled run.

Run it again: lastRunDateTime again shows the previous run's start time, since each run updates the parameter with its own schedule start time.
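Outside OIC, the lifecycle of lastRunDateTime across runs can be sketched in plain Python. The dictionary standing in for OIC's persisted schedule parameters and the function name are assumptions for illustration only:

```python
from datetime import datetime, timezone

# Hypothetical stand-in for OIC's persisted schedule parameters.
schedule_params = {"lastRunDateTime": ""}

def run_scheduled_integration(start_time=None):
    """One scheduled run: log the previous run time, then update the parameter."""
    start_time = start_time or datetime.now(timezone.utc).isoformat()
    last_run = schedule_params["lastRunDateTime"]
    print("last run date and time: " + last_run)       # Logger activity
    schedule_params["lastRunDateTime"] = start_time    # Assign activity
    return last_run
```

The first call returns '' (the default value); every later call returns the start time stored by the previous run.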

Monday, July 12, 2021

OIC - log file naming convention

In integrations, as part of error handling we create a log file to store the error details in a designated directory for support purposes. We follow the log file naming convention below.

Naming convention:

<Integration name>_<current date yyyy-mm-dd><hours from date time>.<minutes from date time>.<seconds from date time>.log

The file name expression is built in an Assign activity.
For example:
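As a hedged Python sketch of the convention (the integration name here is a placeholder, not an actual integration from this blog):

```python
from datetime import datetime

def build_log_file_name(integration_name, when=None):
    """Build <name>_<yyyy-mm-dd><HH>.<MM>.<SS>.log per the convention above."""
    when = when or datetime.now()
    return f"{integration_name}_{when:%Y-%m-%d}{when:%H}.{when:%M}.{when:%S}.log"
```

For instance, build_log_file_name("MyIntegration", datetime(2021, 7, 12, 9, 5, 30)) yields "MyIntegration_2021-07-1209.05.30.log".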


OIC - Integration Metadata Access

Many times we want to use the name of the integration, its version, identifier, instance ID, and other environment- and runtime-specific information inside the OIC integration flow. With the "Integration Metadata" feature we can fetch these details dynamically, so we don't need to hardcode them.

The following metadata are available:

Self data:
  • Name
  • Identifier
  • Version
Self-runtime data:
  • Instance ID
  • Invoked by name
Self-environment data:
  • Service instance name
  • Base URL

Where we can use them:
These read-only fields can be used in any orchestration action, such as Assign, Logger, Notification, etc.

Steps to use metadata:

Step 1: Create a new integration or edit an existing integration flow.

Step 2: Add a new Log action.

Step 3: Edit the log message; in the source tree you can see the list of metadata. Drag and drop the required metadata into the expression builder.

Step 4: Save and activate the integration.

Step 5: Trigger the integration flow using its endpoint. Go to Monitoring > Tracking, open the particular run, and click 'View Activity Stream'; you should see the log message that logs the integration name and other metadata.
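For example, a log message combining several metadata fields could use the concat function, as in the sketch below. The exact source-tree fields vary, so treat the angle-bracket placeholders as stand-ins for the real metadata nodes you drag in from the mapper:

```
concat("Integration: ", <name from metadata>, ", version: ", <version from metadata>, ", instance id: ", <instance ID from metadata>)
```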

Sunday, July 11, 2021

OIC - Common Integration Pattern Pitfalls and Design Best Practices

Common Integration Pattern Pitfalls and Design Best Practices:

Note the following best practices and integration pattern pitfalls to avoid when designing an integration.

  • Avoid Common Integration Pattern Pitfalls
  • Avoid Creating Too Many Scheduled Integrations
  • Synchronous Integration Best Practices
  • Design Long-Running or Time-Consuming Integrations as Asynchronous Flows
  • Time Outs in Service Calls During Synchronous Invocations
  • Parallel Processing in Outbound Integrations

Avoid Common Integration Pattern Pitfalls

  • Chatty Integrations
  • Scheduled Job that Never Stops Trying to Process
  • Import an Externally Updated IAR File
  • Synchronous Integration Doing Too Much
  • Too Many Connections in an Integration
  • Read Files with Many Records
  • Integrations Running Unchanged Despite Changing Business Needs

1. Chatty Integrations:

Use case: 
Synchronise records in a file or large data set with an external system.

Using an invoke activity within a looping construct to call external APIs for every record.

Why pitfall:
Downstream applications receive a large number of atomic requests, putting the entire system under duress.

Best practices:
  • Leverage application capabilities to accept multiple records in a single request.
  • Leverage adapter capabilities to send a large data set as attachments or files.
  • Use a stage file action with the append file option to build up the file, then send it to the destination.
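The first best practice (batching records per request) can be sketched in Python; send_batch below is a hypothetical stand-in for a downstream bulk API:

```python
def chunk(records, batch_size):
    """Split records into fixed-size batches."""
    for i in range(0, len(records), batch_size):
        yield records[i:i + batch_size]

def sync_records(records, send_batch, batch_size=100):
    """Instead of one call per record (chatty), make one call per batch."""
    calls = 0
    for batch in chunk(records, batch_size):
        send_batch(batch)  # hypothetical bulk endpoint accepting many records
        calls += 1
    return calls
```

With a batch size of 100, synchronising 1,000 records becomes 10 requests instead of 1,000, which is far gentler on the downstream application.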

2. Scheduled job that never stops trying to process:

Use case:
Process records within a set of files with a tight SLA.

The scheduled integration looks for all files to process and loops over them, processing each sequentially until no files remain.

Why pitfall:
If a large number of files exist, one run of a scheduled job executes for a long time, starves other jobs, and may be terminated by the framework.

Best Practice:
  • Limit the number of files to process in a single scheduled run.
  • Use schedule parameters to remember the last processed file for the next run.
  • Invoke the run now command to trigger processing of the next file if waiting for the next scheduled run is not feasible. 
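The first two practices, limiting files per run and remembering the position in a schedule parameter, can be sketched in Python (the function name and the lexicographic file-ordering scheme are assumptions for illustration):

```python
def run_once(all_files, last_processed, max_files=5):
    """Process at most max_files files newer than last_processed.

    Returns the new "last processed" marker, which would be stored in a
    schedule parameter for the next run, so no single run executes too long.
    """
    pending = sorted(f for f in all_files if f > last_processed)
    batch = pending[:max_files]
    for f in batch:
        pass  # process file f here
    return batch[-1] if batch else last_processed
```

Each run handles a bounded slice of the backlog and hands the marker forward, exactly the role the schedule parameter plays in OIC.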

3. Import an externally updated IAR file:

Use case:
Need to leverage advanced XSL constructs that may not be available in the mapper.

Updating the IAR file externally and then importing it into Oracle Integration.

Why pitfall:
Activation failures may occur.
This can lead to metadata inconsistency and validation failures.

Best Practice:
Use the import map feature in Oracle Integration.

4. Synchronous Integration doing too much:

Use case: 
A request triggers complex processing involving enrichment and updates across multiple systems.

A huge synchronous integration modeling a large number of invokes and conditional logic.

Why pitfall:
Susceptible to timeouts.
Blocking calls hold resources and starve other integrations.

Best Practice:
  • Explore moving completely to an asynchronous integration (fire and forget, or async response). This also supports resubmission of failures.
  • Optimize sync processing with a coarse grained external API that replaces multiple chatty calls.
  • Split into a sync integration containing the mandatory processing before sending a response, and trigger separate async fire-and-forget integrations for the remaining processing logic.
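The split between mandatory synchronous work and fire-and-forget processing can be sketched with a queue and a worker thread in Python; the function names and payload shape are invented for illustration:

```python
import queue
import threading

# Hypothetical split: do only mandatory work synchronously, queue the rest.
background_jobs = queue.Queue()

def handle_request(payload):
    """Synchronous part: validate, enqueue the rest, and respond fast."""
    result = {"status": "accepted", "id": payload["id"]}
    background_jobs.put(payload)          # fire-and-forget remainder
    return result

def worker(done):
    """Asynchronous part: enrichment/updates run outside the request path."""
    while True:
        job = background_jobs.get()
        if job is None:                   # sentinel to stop the worker
            break
        done.append(job["id"])            # stand-in for the heavy processing
```

The caller gets an immediate "accepted" response while the slow work drains from the queue, which is the same shape as a sync integration handing off to async fire-and-forget integrations.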

5. Too many connections in an integration

Use case:
As developers create integrations, they define their own connections pointing to the same application. This leads to many duplicate connections.

Every developer creates their own connection using a different set of configurations/credentials.

Why pitfall:
A high number of connections makes manageability painful, especially when you need to update endpoints, credentials, etc.

Best practice:
Have a custodian create the needed connections and ensure duplicate connections of the same type are not created.
Establish naming conventions and maintain a standard set of configurations.

6. Read files with many records

Use case:
Read a file with a large number of records and process individual records.

Reading the entire file into memory using the read file option and processing record by record.

Why pitfall:
Consumes large amounts of memory and impacts other system processing.

Best practice:
Download the file to the stage location using the download file option.
Use the read file with segments option. The platform automatically processes segments in parallel and brings only the needed portions of the file into memory.

7. Integrations running unchanged despite changing business needs

Use case:
Integrations/schedules created during the initial implementation continue to run even though your business requirements have changed over time.

Integrations and scheduled jobs created during the initial product implementation are never re-evaluated against changing business needs.

Why pitfall:
Unnecessary runs of jobs that handle no work.
Clutter with dead integrations, life cycle management overheads and developer confusion.

Best practice:
Periodically analyze existing integrations and schedules against current business needs.
Deactivate integrations that are no longer needed.


Reference: Common Integration Pattern Pitfalls and Design Best Practices

django - webpage to input text and remove punctuation from it.

from django.contrib import admin
from django.urls import path
from . import views

urlpatterns = [
    path('', views.index, name="index"),
    path('removePunctuation', views.removePunc, name="removePunc"),
]

from django.http import HttpResponse
from django.shortcuts import render

def index(request):
    # return HttpResponse("Hello")
    return render(request, 'index.html')

def removePunc(request):
    # Read the submitted text and the checkbox state from the GET form.
    text = request.GET.get('text', '')
    check = request.GET.get('removepunc', 'off')
    if check == "on":
        puncList = '''!()-[]{};:'"\\,<>./?@#$%^&*_~'''
        newText = ""
        for char in text:
            if char not in puncList:
                newText = newText + char
        return HttpResponse(newText)
    return HttpResponse(text)


<!DOCTYPE html>
<html lang="en">
<meta charset="UTF-8">
<title>template is working</title>
<h1>Welcome to text analyzer. Please enter your text.</h1>
<form action="/removePunctuation" method="get">
<textarea name="text" style="margin: 0px; width: 1307px; height: 111px;"></textarea><br/>
<input type="checkbox" name="removepunc">Remove Punctuation<br/>
<button type="submit">Analyze Text</button>
</form>
</html>

Web pages:

You can create another template and show the removed punctuation text there.

<!DOCTYPE html>
<html lang="en">
    <meta charset="UTF-8">
    <title>Analyzing Your Text...</title>
<h1>Your Analyzed Text - {{ purpose }}</h1>
    {{ analyzed_text }}
</html>


In removePunc, render this template instead of returning the plain response:

params = {'purpose': 'Removed Punctuations', 'analyzed_text': newText}
return render(request, 'analyze.html', params)

django - webpage to have personal navigations

from django.contrib import admin
from django.urls import path
from . import views

urlpatterns = [
    path('', views.index, name="index"),
    path('navigation', views.navigationBar, name="navigation"),  # URL path assumed; adjust to your route
]

from django.http import HttpResponse
from django.shortcuts import render

def navigationBar(request):
    return render(request, 'navigationUrls.html')


<!DOCTYPE html>
<html lang="en">
<meta charset="UTF-8">
<title>Personal Navigation</title>
<h1>Personal Navigations:<br/></h1>
<a href="" target="_blank">My Blog Page</a><br/>
<a href="" target="_blank">Facebook Login Page</a><br/>
<a href="" target="_blank">Twitter Page</a><br/>
<a href="" target="_blank">Instagram Page</a>
</html>


1. Export 11g OSB code and import in 12c Jdeveloper. Steps to import OSB project in Jdeveloper:   File⇾Import⇾Service Bus Resources⇾ Se...