A routing rule can be executed either in parallel or sequentially. To specify an execution type for a routing rule, select the Sequential or Parallel execution type in the Routing Rules section.
Basic Principles of Sequential Routing Rules:
- Mediator evaluates routings and performs the resulting actions sequentially. Sequential routings are evaluated in the same thread and transaction as the caller.
- Mediator always enlists itself into the global transaction propagated through the thread that is processing the incoming message. For example, if an inbound JCA adapter invokes a Mediator, the Mediator enlists itself with the transaction that the JCA adapter has initiated.
- Mediator propagates the transaction through the same thread as the target components while executing the sequential routing rules.
- Mediator never commits or rolls back transactions propagated by external entities.
- Mediator manages the transaction only if the thread invoking Mediator does not already have an active transaction. For example, if Mediator is invoked from an inbound SOAP service, Mediator starts a transaction and commits or rolls back that transaction depending on success or failure (a minimal sketch of this rule follows this list).
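To make the transaction rules above concrete, here is a minimal, purely illustrative sketch in plain JTA. This is not the actual Mediator engine code, and the class and method names are invented for illustration: it enlists in the caller's active transaction without ever completing it, and starts, commits, or rolls back its own transaction only when the caller has none.

```java
import javax.transaction.Status;
import javax.transaction.UserTransaction;

// Conceptual sketch only: how a sequential routing rule could honor the
// caller's transaction, per the rules listed above.
public class SequentialRoutingSketch {

    private final UserTransaction utx;

    public SequentialRoutingSketch(UserTransaction utx) {
        this.utx = utx;
    }

    public void route(Runnable routingRule) throws Exception {
        boolean startedHere = false;
        if (utx.getStatus() == Status.STATUS_NO_TRANSACTION) {
            utx.begin();           // e.g. invoked from an inbound SOAP service
            startedHere = true;
        }
        try {
            routingRule.run();     // runs in the caller's thread and transaction
            if (startedHere) {
                utx.commit();      // Mediator completes only transactions it started
            }
        } catch (Exception e) {
            if (startedHere) {
                utx.rollback();
            }
            throw e;               // otherwise the fault goes back to the caller
        }
    }
}
```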
Basic Principles of Parallel Routing Rules:
- Mediator queues and evaluates routings in parallel in different threads.
- The messages of each Mediator service component are retrieved in a weighted, round-robin fashion to ensure that all Mediator service components receive parallel processing cycles. This is true even if one or more Mediator service components produce a higher number of messages compared to other components. The weight used is the message priority set when designing a Mediator service component. Higher numbers of parallel processing cycles are allocated to the components that have higher message priority.
By using parallel routing rules, services can be designed to be truly asynchronous. However, the service engine executes these requests in a rather unique fashion.
Let's say you have three mediator services deployed to your SOA server, each with a single parallel routing rule. When a mediator service receives a message:
- The message is placed into the Mediator store by the dispatcher.
- Its message metadata is written to the MEDIATOR_DEFERRED_MESSAGE table.
- The payload goes into the MEDIATOR_PAYLOAD table.
The mediator service engine has a single thread called the locker thread. The locker thread surfaces message metadata from the MEDIATOR_DEFERRED_MESSAGE table into an internal in-memory queue, and it does this in its own transaction. The MEDIATOR_DEFERRED_MESSAGE table also records the state of each message, with the following values:
0 - READY
1 - LOCKED
2 - COMPLETED SUCCESSFULLY
3 - FAULTED
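If you need to see how many deferred messages are in each of these states, a quick query against the SOA infrastructure schema can help. The sketch below is a minimal example and assumes the table exposes a STATE column holding the values listed above and that the DEV_SOAINFRA schema is reachable over JDBC; the connection details are placeholders.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Counts MEDIATOR_DEFERRED_MESSAGE rows per state (0=READY, 1=LOCKED,
// 2=COMPLETED SUCCESSFULLY, 3=FAULTED). Column name and credentials are
// assumptions for illustration.
public class DeferredMessageStates {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                 "jdbc:oracle:thin:@//dbhost:1521/soadb",    // placeholder URL
                 "DEV_SOAINFRA", "welcome1");                // placeholder credentials
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery(
                 "SELECT state, COUNT(*) FROM mediator_deferred_message GROUP BY state")) {
            while (rs.next()) {
                System.out.printf("state=%d count=%d%n", rs.getInt(1), rs.getInt(2));
            }
        }
    }
}
```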
Hence, it is important to understand how the locker thread works behind the scenes, as this will affect your design decisions:
The locker thread follows an algorithm to process the messages stored by the dispatcher, and there is only one locker thread per managed server. After the dispatcher has stored the message data in the MEDIATOR_DEFERRED_MESSAGE table, the locker thread locks messages whose state is 0 (READY); the number of messages locked in each pass depends on the Parallel Maximum Rows Retrieved parameter set under EM -> SOA-INFRA -> SOA Administration -> Mediator Properties. The locker thread cycles through one mediator component at a time and checks whether it has any requests to process. It claims that component's messages by changing their state to 1 (LOCKED), then sleeps for the interval defined by the Parallel Locker Thread Sleep setting before moving on to the next mediator component. If it finds no messages to process, it moves on to the next component, and the next, until it loops back to the first, where it then processes that component's next batch of requests.
For example, if there are three mediator components m1, m2, and m3 with priorities 1, 2, and 3, the algorithm goes: lock m1 -> sleep -> lock m2 -> sleep -> lock m3 -> sleep -> lock m2 -> sleep -> lock m3 -> sleep -> lock m3. In these six iterations of the locker thread, m1's messages are locked once, m2's twice, and m3's three times, in line with their priorities, and all of this happens within a single locker cycle (a small sketch reproducing this schedule follows the logger list below). Only after the locker thread has locked the retrieved messages are they queued for the worker threads to process. So if you have many mediator components with parallel routing rules (for example, 50), it takes a considerable amount of time for the locker thread to complete one locker cycle, and components with lower priority wait even longer before their messages are locked. The locker cycle is reset whenever you deploy or undeploy a mediator component with parallel routing rules; this ensures that the components with higher priority are processed in the next cycle. You can observe this behavior by setting the logging level to TRACE:32 (FINEST) in Oracle Enterprise Manager Fusion Middleware Control for the following loggers:
- oracle.soa.mediator.common
- oracle.soa.mediator.service
- oracle.soa.mediator.dispatch
- oracle.soa.mediator.serviceEngine
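The priority-weighted order described in the example above can be reproduced with a few lines of code. The sketch below is only one plausible way to generate that schedule; it is not the engine's implementation, and the component names and priorities are taken from the example.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Reproduces the locking order quoted above for components m1, m2, m3 with
// priorities 1, 2, 3: higher-priority components are visited in more passes
// of the same locker cycle.
public class LockerCycleSketch {
    public static void main(String[] args) {
        Map<String, Integer> priority = new LinkedHashMap<>();
        priority.put("m1", 1);
        priority.put("m2", 2);
        priority.put("m3", 3);

        int maxPriority = priority.values().stream().max(Integer::compare).orElse(0);
        List<String> locks = new ArrayList<>();
        for (int pass = 1; pass <= maxPriority; pass++) {
            for (Map.Entry<String, Integer> e : priority.entrySet()) {
                if (e.getValue() >= pass) {        // component still owed a visit
                    locks.add("lock " + e.getKey());
                }
            }
        }
        // Each visit is followed by the Parallel Locker Thread Sleep interval.
        System.out.println(String.join(" -> sleep -> ", locks));
        // prints: lock m1 -> sleep -> lock m2 -> sleep -> lock m3 -> sleep ->
        //         lock m2 -> sleep -> lock m3 -> sleep -> lock m3
    }
}
```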
After the locker thread has locked the messages, the worker threads retrieve them from the in-memory queue and process them. The number of worker threads can be tuned by changing the Parallel Worker Threads property under EM -> SOA-INFRA -> SOA Administration -> Mediator Properties. Once a message has been processed, the worker thread changes its state to either 2 (COMPLETED SUCCESSFULLY) or 3 (FAULTED).
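The locker/worker split can be pictured as a producer-consumer pair: the single locker thread fills an in-memory queue, and a fixed pool of worker threads drains it. The following is a minimal sketch of that pattern, not engine code; the pool size stands in for the Parallel Worker Threads property, and process() and updateState() are hypothetical stand-ins for executing the routing rule and writing the final state back to MEDIATOR_DEFERRED_MESSAGE.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;

// Worker threads draining the in-memory queue that the locker thread fills.
public class WorkerPoolSketch {

    static final int COMPLETED = 2;   // COMPLETED SUCCESSFULLY
    static final int FAULTED   = 3;   // FAULTED

    public static void main(String[] args) {
        int parallelWorkerThreads = 4;                          // tuned in Mediator Properties
        BlockingQueue<String> inMemoryQueue = new LinkedBlockingQueue<>();
        ExecutorService workers = Executors.newFixedThreadPool(parallelWorkerThreads);

        for (int i = 0; i < parallelWorkerThreads; i++) {
            workers.submit(() -> {
                while (!Thread.currentThread().isInterrupted()) {
                    try {
                        String messageId = inMemoryQueue.take();   // locked earlier by the locker thread
                        try {
                            process(messageId);
                            updateState(messageId, COMPLETED);
                        } catch (Exception e) {
                            updateState(messageId, FAULTED);
                        }
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }
            });
        }
    }

    static void process(String messageId) { /* execute the parallel routing rule */ }

    static void updateState(String messageId, int state) { /* write back to MEDIATOR_DEFERRED_MESSAGE */ }
}
```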
The engine was designed to prevent thread starvation caused by load on a single composite. What the engine wants to avoid is this: if one Mediator service has received hundreds of thousands of requests and another has received only two, each service should still get a fair share of processing time; otherwise the two requests might wait hours to execute. Thus, the three settings to consider for asynchronous Mediator services are the following:
- The Parallel Locker Thread Sleep setting: defined at the Mediator Service Engine level.
- The number of threads allocated to the Mediator Service Engine: defined by the Parallel Worker Threads parameter.
- The Priority property: set at design time and applicable only to parallel routing rules.
Another important point: when a mediator service engine starts, it registers itself in a database table called MEDIATOR_CONTAINERID_LEASE and obtains a container ID. This matters because, when a row is inserted into the MEDIATOR_DEFERRED_MESSAGE table, the engine assigns the deferred message to one of the registered containers in round-robin fashion, stamping it with the ID of the container that should process it.
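The round-robin assignment itself is simple to picture. The sketch below is illustrative only and does not reflect the engine's internal bookkeeping; it just shows how deferred messages could be spread across the container IDs registered in MEDIATOR_CONTAINERID_LEASE so that each message is picked up by exactly one engine instance.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicLong;

// Round-robin choice of the container that should process the next deferred
// message; the chosen ID would be stamped on the MEDIATOR_DEFERRED_MESSAGE
// row at insert time.
public class ContainerAssignmentSketch {

    private final List<String> containerIds;   // one per running mediator service engine
    private final AtomicLong counter = new AtomicLong();

    public ContainerAssignmentSketch(List<String> containerIds) {
        this.containerIds = containerIds;
    }

    public String nextContainer() {
        int index = (int) (counter.getAndIncrement() % containerIds.size());
        return containerIds.get(index);
    }
}
```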
Design considerations when building your composite with mediator parallel routing rules:
- The Priority property is applicable only to parallel routing rules, so set each mediator component's priority based on your business requirements.
- The locker thread cycles through all mediator components with parallel routing rules deployed in your environment, regardless of whether a component has been retired or shut down.
- Use sequential routing rules if latency is important and you expect messages to be processed without delay.
- If you have well over 100 mediator components with parallel routing rules deployed in your environment, the time to complete one locker cycle becomes very long and cannot be tuned away, because there is only one locker thread and the lowest Parallel Locker Thread Sleep value you can set is 1 second (see the rough estimate after this list).
- If a mediator component contains both sequential and parallel routing rules, the sequential routing rules are executed before the parallel routing rules.
- Fault policies are applicable to parallel routing rules only. For sequential routing rules, the fault is thrown back to the caller, and it is the caller's responsibility to handle it. If the caller is an adapter, you can define rejection handlers on the inbound adapter to take care of the errored-out (rejected) messages.
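As a rough estimate of the locker-cycle concern mentioned above: with one locker thread, a sleep after each component visit, and a 1-second minimum sleep, the cycle time grows with the number of components and their priorities. The figures below are back-of-envelope assumptions, not measurements.

```java
// Back-of-envelope estimate of one locker cycle, assuming each component is
// visited once per priority level and the locker sleeps after every visit.
public class LockerCycleEstimate {
    public static void main(String[] args) {
        int components = 100;          // mediator components with parallel routing rules
        int avgPriority = 1;           // average visits per component per cycle
        double sleepSeconds = 1.0;     // minimum Parallel Locker Thread Sleep
        double cycleSeconds = components * avgPriority * sleepSeconds;
        System.out.printf("one locker cycle takes roughly %.0f seconds%n", cycleSeconds);
        // ~100 s before a given component's messages are locked again
    }
}
```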