In enterprise integrations, not all errors are the same.
Some errors are due to business rules, while others are technical/system failures.
If we don’t categorize them properly, monitoring and ITSM tools like Datadog or ServiceNow end up full of noisy, unclear alerts.
This blog explains how to design a clean, scalable error classification framework in Oracle Integration Cloud (OIC).
🎯 Why Error Categorization Matters
When invoking any backend service (REST/SOAP API, ERP, DB, File, etc.), failures can occur due to:
❌ Missing mandatory fields
❌ Validation failure
❌ File not found
❌ Endpoint down
❌ Timeout / network issue
❌ Authentication failure
If we treat everything as “technical error”, business users get confused.
If we treat everything as “business error”, support teams struggle.
So we need a structured approach.
🟢 Step 1: Define Error Types
1️⃣ Business Errors
These occur due to functional validation or business rule violations.
Examples:
Mandatory field missing
Invalid data format
Duplicate record
File not found (expected business file missing)
Validation failure from downstream API
👉 These should be thrown as custom business faults using the Throw New Fault action in OIC.
Example:
Error Code: MANDATORY_FIELD_ERROR
Error Type: BUSINESS
Severity: MEDIUM
2️⃣ Technical Errors
These occur due to system/infrastructure failures.
Examples:
500 Internal Server Error
Connection timeout
Authentication failure
Network issue
Service unavailable
👉 These are generally caught in the Global Fault Handler and categorized as technical errors.
Example:
Error Code: ENDPOINT_TIMEOUT
Error Type: TECHNICAL
Severity: HIGH
🛠 Step 2: Throw Business Errors in OIC
When invoking a business API:
Add a Scope
Inside the scope → Add service invocation
In the scope’s fault handler:
Check response error
Use Throw New Fault
Set custom error key (like FILE_NOT_FOUND_ERROR)
This clearly separates a business failure from a system failure, as shown in the sketch below.
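As a rough sketch, here is the kind of information the thrown business fault could carry (the field names and values below are illustrative, not an exact OIC fault schema):
{
  "code": "FILE_NOT_FOUND_ERROR",
  "reason": "Expected customer file was not found at the source location",
  "details": "Integration: CustomerSync, Scope: ReadCustomerFile"
}
The code is the error key we will later match against the Lookup; reason and details stay human-readable for troubleshooting.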
🗂 Step 3: Use Lookup for Centralized Error Mapping
Create a Lookup Table in OIC:
Error Key | Error Type | Reason | Severity
--- | --- | --- | ---
MANDATORY_FIELD_ERROR | BUSINESS | Mandatory field missing | MEDIUM
FILE_NOT_FOUND_ERROR | BUSINESS | Expected file not available | HIGH
ENDPOINT_TIMEOUT | TECHNICAL | Service timeout | HIGH
DEFAULT_TECH_ERROR | TECHNICAL | Unknown system failure | CRITICAL
(Add a separate error code column if needed; a JSON view of the same mapping is shown below for illustration.)
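Conceptually, this Lookup behaves like a simple key-to-attributes map. Shown here as JSON purely for illustration (in OIC it lives as a Lookup, not a file):
{
  "MANDATORY_FIELD_ERROR": { "errorType": "BUSINESS", "severity": "MEDIUM", "reason": "Mandatory field missing" },
  "FILE_NOT_FOUND_ERROR": { "errorType": "BUSINESS", "severity": "HIGH", "reason": "Expected file not available" },
  "ENDPOINT_TIMEOUT": { "errorType": "TECHNICAL", "severity": "HIGH", "reason": "Service timeout" },
  "DEFAULT_TECH_ERROR": { "errorType": "TECHNICAL", "severity": "CRITICAL", "reason": "Unknown system failure" }
}
Onboarding a new error type is then just a one-row change in the Lookup, with no change to the integrations that use it.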
🔎 Step 4: Match Thrown Error with Lookup
When sending error details to monitoring tools:
Capture thrown error key
Call Lookup
If match found:
Pull error type
Pull severity
Pull reason
If no match found:
Default to DEFAULT_TECH_ERROR (see the example below)
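In the mapper, this is typically done with the dvm:lookupValue function, whose last argument is the default value returned when no row matches; that is exactly where the DEFAULT_TECH_ERROR fallback plugs in. For illustration, a matched key and an unmatched key would resolve roughly like this (values taken from the sample Lookup above; the unmatched key is hypothetical):
{
  "matched": {
    "inputKey": "FILE_NOT_FOUND_ERROR",
    "errorType": "BUSINESS",
    "severity": "HIGH",
    "reason": "Expected file not available"
  },
  "unmatched": {
    "inputKey": "SOME_NEW_ERROR",
    "errorType": "TECHNICAL",
    "severity": "CRITICAL",
    "reason": "Unknown system failure"
  }
}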
This ensures:
Standardized error reporting
Controlled severity levels
Clean dashboards in Datadog / ServiceNow
📡 Step 5: Send Structured Error to Monitoring Tools
Payload Example:
{
"integrationName": "CustomerSync",
"errorCode": "MANDATORY_FIELD_ERROR",
"errorType": "BUSINESS",
"severity": "MEDIUM",
"reason": "Customer email is missing",
"timestamp": "2026-02-17T10:30:00"
}
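For contrast, a technical failure caught in the Global Fault Handler follows the same structure (the values below are illustrative):
{
  "integrationName": "CustomerSync",
  "errorCode": "ENDPOINT_TIMEOUT",
  "errorType": "TECHNICAL",
  "severity": "HIGH",
  "reason": "ERP endpoint did not respond within the configured timeout",
  "timestamp": "2026-02-17T10:31:00"
}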
This helps:
Business team → Understand validation issues
Support team → Identify system failures quickly
Monitoring tools → Trigger correct alerts
🏗 Recommended Architecture Pattern
✔ Use Throw New Fault for business errors
✔ Use Global Fault Handler for technical errors
✔ Maintain centralized Error Lookup Table
✔ Always send structured payload to monitoring tools
✔ Keep default fallback for unknown errors
🚀 Benefits of This Approach
- Clear separation of Business vs Technical issues
- Cleaner observability in Datadog / ServiceNow
- Standardized error governance
- Reusable across all integrations
- Easy to maintain and scale
✅ Final Thought
In OIC, error handling should not be reactive — it should be designed intentionally.
By combining:
Scoped fault handling
Custom business faults
Centralized lookup mapping
Structured monitoring payload
You build a robust, enterprise-grade error handling framework for your integrations.