Real-Time Event Architecture in T24
IRIS, Kafka, and the publish-subscribe revolution — making T24 talk in real time without breaking a sweat.
There was a time when T24 integration meant one thing: polling. Every five minutes, your downstream system would ask T24 "anything new?" and T24 would say "no" and the downstream system would ask again five minutes later. This worked, in the same way that sending a letter and waiting for a reply works. It is reliable. It is predictable. It is also incredibly wasteful.
Real-time event architecture changes this. Instead of polling, T24 publishes events when something happens — a payment is posted, a customer is created, an account is opened — and downstream systems receive those events instantly.
R24 makes this not just possible but practical with IRIS (Interaction Framework) and the Session.publishMessage() API.
The polling problem
In a typical T24 bank, there are dozens of downstream systems that need to know when something changes in T24 — the data warehouse, the reconciliation system, the alerting system, the CRM, the reporting system.
In the old world, each of these systems would poll T24 at regular intervals. This polling creates load on T24. It creates network traffic. And it introduces latency — if the polling interval is five minutes, the downstream system is always five minutes behind.
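The waste is easy to see in code. Below is a toy polling loop, simulating one hour of five-minute polls in which only a single poll finds anything; the `fetch_changes` callback is an illustrative stand-in, not a real T24 API.

```python
def poll_t24(fetch_changes, cycles):
    """Naive polling loop: ask T24 'anything new?' over and over.

    fetch_changes is a hypothetical stand-in for whatever extract or
    enquiry the downstream system calls against T24.
    """
    wasted_calls = 0
    results = []
    for _ in range(cycles):
        changes = fetch_changes()
        if changes:
            results.extend(changes)
        else:
            wasted_calls += 1  # load and network traffic for nothing
        # time.sleep(300)  # five minutes between polls in production
    return results, wasted_calls

# Twelve polls, one hour at five-minute intervals; only one poll
# finds new data, so eleven calls are pure overhead.
batches = iter([[], [], [], ["payment #1"], [], [], [], [], [], [], [], []])
results, wasted = poll_t24(lambda: next(batches), 12)
```

And however often you poll, the data is on average half a polling interval stale by the time you see it.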
The event-driven solution
Event-driven architecture flips the model. Instead of downstream systems asking T24 for data, T24 tells them when data changes.
T24 Event → IRIS → Kafka Topic → Subscriber 1 (Data Warehouse)
                               → Subscriber 2 (Reconciliation)
                               → Subscriber 3 (Alerting)

The key components are:
- Event publisher. Something in T24 that detects a change and publishes an event. In R24, this is the Session.publishMessage() API.
- Event broker. Something that routes events to subscribers. This can be IRIS, Kafka, or a combination of both.
- Event subscribers. Downstream systems that consume events and react to them.
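The three-way split above can be sketched in a few lines. This is a minimal in-memory broker standing in for the IRIS/Kafka routing layer; the class and method names are illustrative, not a Temenos API.

```python
from collections import defaultdict

class EventBroker:
    """Minimal in-memory event broker: routes published events to
    every subscriber of a topic, instantly -- no polling."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, event):
        # Push the event to every registered subscriber as it happens.
        for handler in self._subscribers[topic]:
            handler(event)

broker = EventBroker()
warehouse, alerts = [], []
broker.subscribe("payments", warehouse.append)
broker.subscribe("payments", alerts.append)

# The publisher side: something in T24 detects a change and publishes.
broker.publish("payments", {"type": "payment.posted", "amount": 250})
```

Both subscribers receive the event the moment it is published; adding a fourth or fifth downstream system is one more `subscribe` call, with no extra load on the publisher.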
IRIS: The R24 event framework
IRIS — the Interaction Framework — is Temenos's modern event-publishing mechanism. It is part of the R24 stack and provides a standard way to publish and consume events.
How it works:
- A T24 routine or lifecycle hook calls Session.publishMessage() with an event payload.
- IRIS routes the event to configured subscribers.
- Subscribers receive the event via REST webhooks or Kafka topics.
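To make the "event payload" concrete, here is a sketch of the kind of envelope a lifecycle hook might hand to Session.publishMessage(). The field names and the record id are assumptions for illustration; check the R24 IRIS documentation for the real schema.

```python
import json
import uuid
from datetime import datetime, timezone

def build_event(event_type, business_key, payload):
    """Assemble a self-describing event envelope as a JSON string.

    Hypothetical field names: eventId supports deduplication,
    occurredAt supports ordering and replay downstream.
    """
    return json.dumps({
        "eventId": str(uuid.uuid4()),
        "eventType": event_type,          # e.g. payment.authorised
        "businessKey": business_key,      # the T24 record this concerns
        "occurredAt": datetime.now(timezone.utc).isoformat(),
        "payload": payload,
    })

message = build_event("payment.authorised", "FT24001XYZ123",
                      {"amount": "1000.00", "currency": "EUR"})
```

A unique event id and a timestamp in every envelope cost almost nothing to add and pay for themselves later, as the gotchas section below will show.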
What you can publish:
- Payment lifecycle events (payment created, payment authorised, payment settled)
- Record lifecycle events (record created, record updated, record authorised)
- Custom business events (customer onboarded, account closed, limit exceeded)
The R24 hooks that publish events:
- PaymentLifecycle hooks — getCreditAccount(), getDebitAccount(), validateCreditParty(), getPaymentDate(), updateProduct(), updateProcessSequence(), skipMessage(), getChargeResponse(), getFileName()
- RecordLifecycle hooks — postUpdateRequest(), enableAutomaticAuthorisation()
- Delivery hooks — validateBic(), mapTagValuesToRecord()
Kafka: The event backbone
While IRIS handles the T24-side event publishing, Kafka handles the enterprise event backbone. Kafka provides durable event storage, multiple consumers, ordered delivery, and scalability.
T24 → IRIS → Kafka Producer → Kafka Topic → Kafka Consumer → Downstream System
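What Kafka adds over a plain broker is the durable log: events are retained in order, and each consumer group reads at its own offset. The sketch below models that behaviour in memory; the names are illustrative, and a real deployment would of course use a Kafka client library against a real cluster.

```python
class TopicLog:
    """In-memory sketch of a Kafka topic: an append-only log that
    several consumer groups read independently at their own offsets."""

    def __init__(self):
        self._log = []        # ordered, retained events
        self._offsets = {}    # consumer group -> next offset to read

    def produce(self, event):
        self._log.append(event)   # ordered delivery within the topic

    def consume(self, group):
        offset = self._offsets.get(group, 0)
        batch = self._log[offset:]          # each group reads at its own pace
        self._offsets[group] = len(self._log)
        return batch

topic = TopicLog()
topic.produce("payment.created")
topic.produce("payment.settled")

warehouse_batch = topic.consume("data-warehouse")
topic.produce("account.opened")
recon_batch = topic.consume("reconciliation")   # sees all three events
```

Because the log is durable, a slow or restarted consumer simply resumes from its last offset — the events are still there, which is exactly what the retention question in the gotchas below is about.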
The real-time use cases
Real-time data warehouse
Before: Nightly batch file from T24 to data warehouse. Data is 24 hours old.
After: Real-time events from IRIS to Kafka to data warehouse. Data is seconds old.
Real-time fraud detection
Before: Fraud detection system polls T24 every five minutes. Fraudulent transactions have a five-minute window.
After: IRIS publishes the transaction event. Fraud detection system receives it instantly. Fraudulent transactions are flagged in milliseconds.
Real-time reconciliation
Before: Reconciliation runs overnight. Discrepancies are discovered the next day.
After: Reconciliation happens in real time. Discrepancies are flagged immediately.
The gotchas
- Event schema management. If you change the event format, all subscribers need to be updated. Versioning is essential.
- Event ordering. If events are processed out of order, the downstream system can end up in an inconsistent state.
- Event deduplication. If the same event is published twice, the downstream system should not process it twice. Idempotent processing is essential.
- Event retention. How long do you keep events? Long enough for replay, but not so long that storage costs spiral.
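The ordering and deduplication gotchas can both be handled on the consumer side. The sketch below shows one defensive pattern, assuming each event carries a unique id and a sequence number (field names are illustrative): duplicates are dropped via a seen-id set, and out-of-order events wait in a reorder buffer until the gap fills.

```python
class SafeConsumer:
    """Downstream consumer hardened against duplicate and
    out-of-order events. Illustrative field names: eventId for
    idempotency, seq for ordering."""

    def __init__(self):
        self._seen = set()     # eventIds already accepted
        self._buffer = {}      # seq -> event, waiting for gaps to fill
        self._next_seq = 1
        self.applied = []      # events applied, in order

    def receive(self, event):
        if event["eventId"] in self._seen:
            return                          # duplicate: process exactly once
        self._seen.add(event["eventId"])
        self._buffer[event["seq"]] = event
        # Apply only contiguous events; hold later ones until the gap fills.
        while self._next_seq in self._buffer:
            self.applied.append(self._buffer.pop(self._next_seq))
            self._next_seq += 1

consumer = SafeConsumer()
consumer.receive({"eventId": "a", "seq": 1})
consumer.receive({"eventId": "c", "seq": 3})   # early: buffered, not applied
consumer.receive({"eventId": "a", "seq": 1})   # duplicate: silently ignored
consumer.receive({"eventId": "b", "seq": 2})   # fills the gap: 2 then 3 apply
```

In production the seen-id set would need bounded retention of its own (a TTL or a watermark below which duplicates can no longer arrive), which is the retention gotcha again in miniature.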
The bottom line
Real-time event architecture is not a nice-to-have. It is a competitive necessity. Banks that can react to events in real time — fraud detection, reconciliation, customer notifications — have a significant advantage over banks that are still polling every five minutes.
R24's IRIS framework makes event publishing practical. Kafka makes event consumption scalable. And the combination of the two makes real-time integration achievable.
If your T24 integration strategy does not include event-driven architecture, you are building yesterday's solution for tomorrow's problems.