Friday, September 20, 2019

12c SOA - SFTP configuration and FTPPlan.xml update

Under Change Center, click Lock & Edit before making any modifications on the WebLogic server.
Under Domain Structure, select Deployments.
Under the Configuration tab, click FtpAdapter.
Under Settings for FtpAdapter, select the Configuration tab.
Select New to create a new connection pool.
Select the radio button for creating an outbound connection.
Give your connection a JNDI name and click Finish.
You can see that the JNDI name has been successfully created.
Select the connection pool you have created and, under the Properties tab, enter a host name and press Enter so that it is saved; otherwise your updated host name will not be reflected.
The default port is 21; update it to your port number and press Enter.
Set the UseSftp property to true to use SFTP; otherwise the adapter will use plain FTP. Don't forget to save the modifications.
Enter the user name and password.
Once you save, you will be redirected to the Properties page with a confirmation message.
After saving, go to Deployments, select FtpAdapter and click Update.
Select the Update radio button and click Next.
The location of the updated ftp adapter plan.xml file is displayed. Click Finish.
You will be redirected to the Configuration page with a confirmation message. Now activate the changes.
After activating the changes, the FTPPlan.xml file will be updated with the new modifications.

Note: These modifications to the FTPPlan.xml file happen only on node 1, so we need to copy this file to the corresponding location on node 2 as well. After copying, restart the servers for the modifications to take effect.

12c SOA - DVM VS XREF

Introduction:
When an object flows from one system to another, with each system using its own entities to represent the same type of object, a transformation is required to map the entity from one system to the other.

For example, when a new customer is created in a SAP application, you might want to create a new entry for the same customer in your Oracle E-Business Suite (EBS) application.

So the requirement is that these two entities be mapped across different domains. And even if another domain is added in the future, its entity must again be representable in some common format.
Oracle SOA offers two solutions:
  1. Domain-Value Maps (DVM)
  2. Cross References (XREFs)
Domain-Value Maps (DVMs):
  • Domain-Value Maps enable mapping of vocabulary in one domain to another, which is useful when different domains represent the same data in different ways.
  • The limitation is that DVM is a static XML file where all the mappings are already defined in design time.
  • DVM works best with a small set of mapping values. With a small set, you can directly map values between multiple domains; a direct mapping of values between two or more domains is also known as point-to-point mapping. The internal cache manager increases performance tremendously in this case, as the XML document always resides in runtime memory.
See how-to-create-dvm for details on the DVM implementation steps.
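At runtime a DVM behaves like a static in-memory map; the following Python sketch (with hypothetical domain and value names, not an actual DVM file) illustrates the point-to-point lookup idea:

```python
# Minimal sketch of the DVM idea: a static, design-time map held in memory.
# Domain names ("SAP", "EBS") and values here are hypothetical examples.
DVM_CURRENCY = {
    # (source domain, value) -> {target domain: value}
    ("SAP", "USD"): {"EBS": "US Dollar"},
    ("SAP", "EUR"): {"EBS": "Euro"},
}

def lookup_value(source_domain, value, target_domain, default=None):
    """Point-to-point lookup, loosely analogous to dvm:lookupValue()."""
    row = DVM_CURRENCY.get((source_domain, value))
    if row is None:
        return default
    return row.get(target_domain, default)
```

Because the whole map sits in memory, a lookup is a constant-time dictionary access, which is why the cached DVM performs so well for small, static value sets.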

Cross References (XREFs):
  • Using cross references, you can dynamically populate, maintain, and map values for equivalent entities created by different applications.
  • It is dynamic in nature since the values to the XREF can be populated dynamically.
  • It is stored in XREF_DATA table in the SOA_INFRA schema in SOA Dehydration store referenced by the jdbc/xref JNDI name.  
  • When creating, updating, and deleting objects in one application, there may be a need to propagate the changes to other applications. For example, when a new customer is created in a SAP application, you may need to create a new entry for the same customer in your Oracle E-Business Suite application (EBS). However, since the applications being integrated are using different entities and IDs to represent the same information you need to cross reference these entities.
  • As the cross reference data is persisted in a database, it remains available for applications until explicitly deleted, preferably by using the appropriate XREF functions.
See how-to-create-Cross-reference for details on the XREF implementation steps.



12c SOA - XREF (Cross Reference)

Introduction:
  • Cross references are used to dynamically populate, maintain and map values for equivalent entities created by different applications.
  • This facility uses a database table – default name is XREF_DATA – that contains records per entity with the identity values in all domains. 
  • The definition of a cross reference is stored in a file, for example Employee.xref. The file contains the name of the cross reference as well as all the End Systems (identity domains) that are mapped.
  • This Xref file is exposed from the MDS, to be used across components and composite applications. 
  • In addition to all real identity domains, it is a good idea to add a ‘canonical’ domain that contains the common or generic identifier that is used to identify the entity in all canonical, domain independent, messages that flow through the service fabric.
Use Case:
Here I will show an example of an employee entity: how employee Ids are mapped between two disparate applications, Finance and Planning, where both use different Ids to refer to the same employee. When an employee gets created in the Finance application, the same entity is created in the Planning application, where the Planning Employee Id is generated by concatenating the Finance EmpFId and EmpFirstName. I will also create a canonical field containing a common GUID, which helps map the FINANCE and PLANNING Ids.

From Finance application:
EmpFId
EmpFirstName
EmpLastName
EmpType
To Planning application:
EmpPId
EmpName
EmpFunction
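To make the mapping concrete, here is a small Python sketch of the transformation described above. Only the EmpPId rule (EmpFId concatenated with EmpFirstName) and the GUID canonical value are stated in this post; the EmpName and EmpFunction mappings shown here are assumptions for illustration:

```python
import uuid

def to_planning(emp):
    """Build the Planning record from a Finance record.
    EmpPId is the Finance EmpFId concatenated with EmpFirstName, as
    described above; uuid4 stands in for oraext:generate-guid().
    The EmpName and EmpFunction rules are assumed for this sketch."""
    guid = str(uuid.uuid4())  # canonical id, to be stored in the XREF
    planning = {
        "EmpPId": emp["EmpFId"] + emp["EmpFirstName"],
        "EmpName": emp["EmpFirstName"] + " " + emp["EmpLastName"],  # assumption
        "EmpFunction": emp["EmpType"],                              # assumption
    }
    return planning, guid
```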

Implementation:
Step1: Create an XREF named Employee






Create the above table structure in the SOA-INFRA DB.


Step2: ADD the FINANCE Employee Id and LINK the PLANNING Employee Id

Create a Planning application Id by concatenating the FINANCE Employee Id and First Name, and assign it to the Planning application.

Create a GUIId variable and store a GUID in it using the oraext:generate-guid() function; this will be used as the value of the canonical field.


Add the FINANCE Employee Id with the canonical value to the XREF.

xref:populateXRefRow('oramds:/apps/soa/xref/Employee.xref','FINANCE',$inputVariable.payload/ns1:EmpFId,'CANONICAL',$varGUIId,'ADD')

LINK the created PLANNING Id into the XREF against the respective FINANCE Id, using the canonical Id.
xref:populateXRefRow('oramds:/apps/soa/xref/Employee.xref','CANONICAL',$varGUIId,'PLANNING',$varEmpPId,'LINK')


Test:

You can see the FINANCE Employee Id added and the PLANNING Id linked in the XREF table.

Step3: Fetch an Id from the XREF (if required by the implementation).
Here we fetch the FINANCE Id based on the PLANNING Id:

xref:lookupXRef('oramds:/apps/soa/xref/Employee.xref','PLANNING',$varEmpPId,'FINANCE',true())


Depending upon the implementation, we can also update and delete entries in the XREF dynamically:
Look up - by using lookupXRef()
ADD/LINK/UPDATE - by using populateXRefRow()
Delete - by using markForDelete()
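To illustrate the ADD/LINK/lookup semantics used in this section, here is a toy in-memory Python simulation; it is not the real XREF_DATA table or the actual xref function implementations, just a sketch of the behaviour:

```python
class XRef:
    """Toy in-memory stand-in for cross-reference rows; each row maps
    domain names to that entity's identifier in each domain."""
    def __init__(self):
        self.rows = []  # each row: {domain: value}

    def populate(self, src_dom, src_val, tgt_dom, tgt_val, mode):
        """Rough analogue of xref:populateXRefRow()."""
        if mode == "ADD":
            # start a new row with two domain values
            self.rows.append({src_dom: src_val, tgt_dom: tgt_val})
        elif mode == "LINK":
            # attach a value to the existing row found via src_dom/src_val
            for row in self.rows:
                if row.get(src_dom) == src_val:
                    row[tgt_dom] = tgt_val
                    return
            raise KeyError("no existing row to LINK against")

    def lookup(self, src_dom, src_val, tgt_dom):
        """Rough analogue of xref:lookupXRef()."""
        for row in self.rows:
            if row.get(src_dom) == src_val:
                return row.get(tgt_dom)
        return None

xref = XRef()
# like: xref:populateXRefRow(..., 'FINANCE', EmpFId, 'CANONICAL', guid, 'ADD')
xref.populate("FINANCE", "F100", "CANONICAL", "guid-1", "ADD")
# like: xref:populateXRefRow(..., 'CANONICAL', guid, 'PLANNING', EmpPId, 'LINK')
xref.populate("CANONICAL", "guid-1", "PLANNING", "F100John", "LINK")
```

Note how the LINK call joins through the canonical value, so a later lookup can cross directly from PLANNING to FINANCE even though the two Ids were never mapped to each other explicitly.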

Thursday, September 19, 2019

12c SOA - Auto Purging SOA Infra setups

Use the Auto Purge page in Oracle Enterprise Manager Fusion Middleware Control to schedule and execute jobs that automatically remove older flow instances, adapter reports, and fault alerts data from the database.

Step1: Under your soa-infra open the menu SOA Infrastructure⇾SOA Administration⇾Auto Purge
Step2: On the Auto Purge page you will have to configure several settings.
Pick your "Auto Purge Job":
    • SOA Flow Purge Job 1
    • SOA Flow Purge Job 2
    • SOA In-Memory Flow Purge Job
Select the "Enabled" checkbox
Define your "Job Schedule"
Set your "Retain Data"
Hit the Apply button

Auto Purge Job:
Run delete_instances_auto_job1 on a schedule appropriate for weekdays (Monday through Friday at midnight). This job is automatically enabled.
Run delete_instances_auto_job2 on a weekend schedule (Saturday and Sunday) that may be more aggressive. This job is not automatically enabled.
Enabled:
Select to enable automatic database purging with the database purge job selected from the Auto Purge Job list.
The purge interval begins when the job is enabled. For example, if you specify seven days in the Retain Data field, data is retained from the day you enable this checkbox, and newer data is retained for seven days from when it was created.
Calendaring Expression icon:
Click to view examples of job scheduling syntax. Copy and paste the syntax appropriate to your environment into the Job Schedule field and modify as necessary. Click More Info to access documentation about how to configure job frequency by setting the repeat_interval attribute.
Job Schedule:
Specify a job execution schedule to purge instances. The default schedule is to perform a daily purge at midnight. This is a required field. To specify the schedule, use valid calendaring expressions. Click the information icon or the Calendaring Expression icon to view examples of commonly used expressions. The scheduling syntax is not case sensitive.
Purge Type:
Single: a single, looped purge script that performs a batched purge.
Parallel: functionally the same as the single, looped purge script, except that this option enables the dbms_scheduler package to spawn multiple purge jobs, each working on a subset of the data.
Retain Data:
Specify the time interval in days for which to retain data. Data within this interval is not purged when the job runs. The default value is seven days.
Maximum Flows to Purge:
Select the maximum number of instance flows to purge in a single job run.
Batch Size:
Select the maximum number of business flows to delete at a time. The default value is 20000.
Degree of Parallel:
Select the number of job executions to run in parallel. The default value is 4.
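Conceptually, the Retain Data setting defines a cutoff date and the purge job only considers flows older than it. A simplified Python sketch of that selection logic (the real job also applies the batch size and max-flows limits, and runs as SQL inside the database):

```python
from datetime import datetime, timedelta

def purge_candidates(flows, retain_days=7, now=None):
    """Return the flow ids older than the retention window.
    `flows` is a list of (flow_id, created_at) tuples standing in for
    flow instances; batch size and max-flows limits are omitted here."""
    now = now or datetime.now()
    cutoff = now - timedelta(days=retain_days)
    # anything created before the cutoff falls outside "Retain Data"
    return [fid for fid, created in flows if created < cutoff]
```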

Step3: To view and configure advanced configuration properties in the System MBean Browser, click More Auto Purge Configuration Properties.

Step4: Click PurgeJobDetails.
The two predefined database purge jobs are displayed.
delete_instances_auto_job1
delete_instances_auto_job2
Expand a job to display all properties for single and parallel purges. When either the single or parallel purge type is executed, the appropriate property values for the selected type are executed.


Scheduler Calendaring Syntax:
Specify an execution schedule or a repeat interval schedule as shown in the examples below.
Run every Day at Midnight
FREQ=DAILY; BYHOUR=00;
Run every weekday at midnight
FREQ=DAILY; BYDAY=MON,TUE,WED,THU,FRI; BYHOUR=00;
Run every weekend at midnight and 05:00AM
FREQ=DAILY; BYDAY=SAT,SUN; BYHOUR=00,05;
Run every weekday 30 minutes past midnight
FREQ=DAILY; BYDAY=MON,TUE,WED,THU,FRI; BYHOUR=00; BYMINUTE=30;
Run every weekend 30 minutes past midnight and 5AM
FREQ=DAILY; BYDAY=SAT,SUN; BYHOUR=00,05; BYMINUTE=30;
Run every Friday at midnight
FREQ=DAILY; BYDAY=FRI; BYHOUR=0;
Run every other Friday at midnight
FREQ=WEEKLY; INTERVAL=2; BYDAY=FRI; BYHOUR=0;
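Each expression above is a semicolon-separated list of NAME=VALUE clauses. As a rough illustration of the syntax's shape, here is a hypothetical Python parser (the real parsing happens inside the database's DBMS_SCHEDULER; this sketch ignores validation):

```python
def parse_calendar(expr):
    """Split a calendaring expression such as
    'FREQ=DAILY; BYDAY=MON,TUE; BYHOUR=00;' into a dict of clause
    name -> list of values. Clause names are case-insensitive, matching
    the note above that the scheduling syntax is not case sensitive."""
    result = {}
    for clause in expr.split(";"):
        clause = clause.strip()
        if not clause:
            continue  # skip the empty tail after the final ';'
        name, _, value = clause.partition("=")
        result[name.strip().upper()] = [v.strip() for v in value.split(",")]
    return result
```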

Setup is done.
Note: when all the setup is done, you can also execute the purge job by pressing the RUN NOW button if you don't want to wait for the next scheduled execution.

Check Executions of Auto Purge Jobs:
Option1: You can check the executions of your Auto Purge Jobs from scheduler jobs under your SOAINFRA DB schema.


Option2: In order to check the executions of your Auto Purge Jobs, you can simply query the SOA_PURGE_HISTORY table under your SOAINFRA schema:
    • START_TIME and END_TIME - the start and end time of the purge job
    • T - whether the job execution was a single or parallel loop
    • THREAD - the number of parallel threads
    • S - the status of the job execution: C = Completed, R = Running

Use the following query to check the used space and free space in SOA infra.
Query:
SELECT USED.TABLESPACE_NAME, USED.USED_BYTES AS "USED SPACE(IN GB)",  FREE.FREE_BYTES AS "FREE SPACE(IN GB)"
FROM
(SELECT TABLESPACE_NAME,TO_CHAR(SUM(NVL(BYTES,0))/1024/1024/1024, '99,999,990.99')
 AS USED_BYTES FROM USER_SEGMENTS GROUP BY TABLESPACE_NAME) USED
INNER JOIN
(SELECT TABLESPACE_NAME,TO_CHAR(SUM(NVL(BYTES,0))/1024/1024/1024, '99,999,990.99')
 AS FREE_BYTES FROM  USER_FREE_SPACE GROUP BY TABLESPACE_NAME) FREE
ON (USED.TABLESPACE_NAME = FREE.TABLESPACE_NAME);


Wednesday, September 18, 2019

12c SOA - Manual purging SOA Infra

Follow the steps for purging SOA infra:

Step1: Use the following query to check the total consumed space by all tables in SOA infra.
select sum(bytes)/1024/1024/1024 "size in GB" from dba_segments where owner='CERT_SOAINFRA' and SEGMENT_TYPE='TABLE';
Step2: This query to check how much space is consumed by each table.
SELECT SEGMENT_NAME TABLE_NAME, SUM(BYTES) / (1024 * 1024 *1024) TABLE_SIZE_GB FROM USER_EXTENTS
WHERE SEGMENT_TYPE = 'TABLE'
GROUP BY SEGMENT_NAME
ORDER BY TABLE_SIZE_GB DESC;

Using this query, we found that the table consuming the most space is AUDIT_TRAIL, so we need to delete old data from this table in order to free some tablespace.
Step3: Use the following query to find a set of data to be deleted. Generally we delete 50K records at a time, in a batched manner.
select * from AUDIT_TRAIL where trunc(CI_PARTITION_DATE) < sysdate and rownum<=50000;
Note: You can also do the same based on count id and max partition date.
select max(CI_PARTITION_DATE) from audit_trail where COUNT_ID < 3600000

Step4: Use the following command to delete the old records, 50K records at a time.
Delete from (select * from AUDIT_TRAIL where trunc(CI_PARTITION_DATE) < sysdate) where  rownum<=50000;
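Deleting in 50K batches is usually driven by a loop that repeats the delete until no matching rows remain. The following Python sketch mimics that driver loop against an in-memory list standing in for AUDIT_TRAIL (illustrative only; in practice the loop would re-execute the SQL above and commit between batches):

```python
def batched_delete(rows, predicate, batch_size=50000):
    """Delete matching rows in batches, mimicking the looped
    'DELETE ... WHERE rownum <= 50000' pattern shown above.
    `rows` is an in-memory stand-in for the AUDIT_TRAIL table;
    returns the total number of rows deleted."""
    deleted = 0
    while True:
        # like: select matching rows limited by rownum <= batch_size
        batch = [r for r in rows if predicate(r)][:batch_size]
        if not batch:
            return deleted  # nothing left to delete, loop ends
        for r in batch:
            rows.remove(r)  # like the batched DELETE (commit would go here)
        deleted += len(batch)
```

Small batches keep undo/redo generation and lock duration bounded, which is the reason for the rownum limit in the delete statement.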
Step5: Use following command to enable row movement.
ALTER TABLE AUDIT_TRAIL enable row movement;

Why do we need to enable row movement?
Because it is disabled by default; enabling row movement allows rows to move (i.e., rowids to change), which the shrink operation requires.

Step6: Use the following command to shrink the table space.
ALTER TABLE AUDIT_TRAIL shrink space;

Why do we need to shrink space?
This command helps us recover space and reset the high water mark (HWM).

What is HWM?
The high water mark (HWM) for an Oracle table is a marker that records the table at its greatest size.
For example, assume you have a million-row table that takes 30 seconds to read. After deleting 900,000 rows, a full scan on the table will still take 30 seconds.
This is because the table's high water mark is not reset by delete operations; hence we use the shrink space command.

Step7: Use the following command to disable row movement.
ALTER TABLE AUDIT_TRAIL disable row movement;

Step8: Use the following query to find the used space and free space for SOA infra
SELECT USED.TABLESPACE_NAME, USED.USED_BYTES AS "USED SPACE(IN GB)",  FREE.FREE_BYTES AS "FREE SPACE(IN GB)"
FROM
(SELECT TABLESPACE_NAME,TO_CHAR(SUM(NVL(BYTES,0))/1024/1024/1024, '99,999,990.99')
 AS USED_BYTES FROM USER_SEGMENTS GROUP BY TABLESPACE_NAME) USED
INNER JOIN
(SELECT TABLESPACE_NAME,TO_CHAR(SUM(NVL(BYTES,0))/1024/1024/1024, '99,999,990.99')
 AS FREE_BYTES FROM  USER_FREE_SPACE GROUP BY TABLESPACE_NAME) FREE
ON (USED.TABLESPACE_NAME = FREE.TABLESPACE_NAME);


Step9: Execute the step 1 query again; you can see that the purging completed successfully and the soa_infra size has decreased.


After the tablespace of SOA infra is reduced, we should enable auto purge from the SOA EM console.


Useful Undo table queries:

Use the following query to find the ACTUAL UNDO SIZE [MByte], UNDO RETENTION [Sec], and NEEDED UNDO SIZE [MByte].
SELECT d.undo_size/(1024*1024) "ACTUAL UNDO SIZE [MByte]",
TO_NUMBER(SUBSTR(e.value,1,25)) "UNDO RETENTION [Sec]",
(TO_NUMBER(e.value)*TO_NUMBER(f.value)*g.undo_block_per_sec)/1048576 "NEEDED UNDO SIZE [MByte]"
FROM (
SELECT SUM(a.bytes) undo_size
FROM v$datafile a,
v$tablespace b,
dba_tablespaces c
WHERE c.contents = 'UNDO'
AND c.status = 'ONLINE'
AND b.name = c.tablespace_name
AND a.ts# = b.ts#
) d,
v$parameter e,
v$parameter f,
(
SELECT MAX(undoblks/((end_time-begin_time)*3600*24))
undo_block_per_sec
FROM v$undostat
) g
WHERE e.name = 'undo_retention'
AND f.name = 'db_block_size';

Use following query to find ACTUAL UNDO SIZE [MByte], UNDO RETENTION [Sec] and OPTIMAL UNDO RETENTION [Sec]
SELECT d.undo_size/(1024*1024) "ACTUAL UNDO SIZE [MByte]",
to_number(SUBSTR(e.value,1,25)) "UNDO RETENTION [Sec]",
ROUND((d.undo_size / (to_number(f.value) *
g.undo_block_per_sec))) "OPTIMAL UNDO RETENTION [Sec]"
FROM (
SELECT SUM(a.bytes) undo_size
FROM v$datafile a,
v$tablespace b,
dba_tablespaces c
WHERE c.contents = 'UNDO'
AND c.status = 'ONLINE'
AND b.name = c.tablespace_name
AND a.ts# = b.ts#
) d,
v$parameter e,
v$parameter f,
(
SELECT MAX(undoblks/((end_time-begin_time)*3600*24))
undo_block_per_sec
FROM v$undostat
) g
WHERE e.name = 'undo_retention'
AND f.name = 'db_block_size';


