Architecture
API and Event-Driven Architecture in T24
The market is moving away from overnight batch COB toward real-time event streaming. This article covers the Temenos Integration Framework (IRIS), Kafka integration, and what changes when your T24 system needs to emit and consume events instead of processing files at midnight — and why you should probably start caring about this before your next COB runs long.
For decades, T24 has operated on a batch-first model. Transactions are captured during the day, processed in COB overnight, and the results are available the next morning. It works. It has worked for thirty years. It is also the reason you have ever had to say "the system will update tomorrow" to a business user who wanted the information now, and watched their face cycle through confusion, frustration, and resignation.
But the industry is changing — instant payments, open banking APIs, and real-time fraud detection all require a system that can emit events as they happen, not twelve hours later. Temenos has been responding to this shift with the Temenos Integration Framework (IRIS), Kafka integration, and a growing set of REST APIs. This article explains what these technologies actually do, how they change the way T24 integrates with other systems, and what T24 teams need to know to work with them — without the vendor marketing.
The batch problem
The traditional T24 integration model looks like this: during the day, transactions are entered into T24 through OFS or the browser interface. At the end of the day, COB runs — it processes interest, generates statements, sends SWIFT messages, and updates balances. External systems get their data through files that are generated during COB and picked up by file transfer processes. If the file transfer fails, nobody notices until the next morning, and then there is a meeting.
This model has three fundamental problems:
- Latency. If a transaction happens at 10am, the downstream system does not know about it until the next COB cycle. For instant payments, that is unacceptable. For anything that happens after COB starts, it is even worse — the transaction waits until the next business day.
- Coupling. The file-based integration model means that every downstream system needs to know the file format, the schedule, and the error handling protocol. Changing the file format requires coordinating with every consumer. This is why your bank still uses a file format that was designed in 2003.
- Scalability. COB processes everything in a single batch window. As transaction volumes grow, the batch window gets longer. Banks that have outgrown their batch window are forced to split COB into multiple cycles or move processing out of COB entirely. The phrase "COB is running long" is how you know the system is reaching its limits.
Event-driven architecture solves all three problems by emitting events as transactions happen, rather than waiting for the next batch cycle. It also introduces a new set of problems, which we will get to, because nothing in T24 is ever simple.
Temenos Integration Framework (IRIS)
IRIS is Temenos's answer to the integration problem. It is a framework that sits between T24 and external systems, providing a standardised way to expose T24 data and functionality through APIs. It is not a separate product you install. It is a framework that runs within the T24 ecosystem, using the same security model, the same data model, and the same operational processes. If you know T24, you can work with IRIS — but you need to understand event-driven concepts that are different from the traditional T24 programming model.
At its core, IRIS does three things:
API exposure.
IRIS can expose T24 business operations as REST APIs. Instead of sending an OFS string and parsing the response — a process that has been described as "reading tea leaves but with more semicolons" — an external system can call POST /api/customer with a JSON payload and get a JSON response. IRIS handles the translation between the REST API and the underlying T24 operation, whether that is an OFS call, a routine invocation, or a database query. The external system does not need to know anything about T24. It just sends JSON and receives JSON. Revolutionary.
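To make that concrete, here is a minimal sketch of the caller's side using Java's built-in HTTP client. The endpoint path, port, and field names are illustrative assumptions, not the actual API contract exposed by any particular IRIS installation.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CreateCustomerExample {
    public static void main(String[] args) throws Exception {
        // Hypothetical payload: field names are illustrative only.
        String payload = """
            {"mnemonic": "DOE001", "shortName": "J DOE", "sector": "1001"}
            """;

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://bank.example.com/api/customer"))  // example host and path
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(payload))
                .build();

        // The caller never sees OFS, jBC, or the T24 data model: just JSON over HTTP.
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        System.out.println(response.statusCode());
        System.out.println(response.body());
    }
}
```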
Event publishing.
IRIS can publish events when certain business events occur in T24 — a payment is processed, a customer is created, an account balance changes. These events are published to a message broker (typically Kafka or IBM MQ) and consumed by downstream systems. The downstream systems do not need to know anything about T24. They just need to subscribe to the event topic and handle the event payload. This is the part that replaces the file-based integration model, and it is the part that requires the most careful design.
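As a rough sketch of what publishing looks like at the Kafka level, whether IRIS does it for you or custom logic hands the event off, here is a minimal Java producer. The topic name, key choice, and event fields are assumptions for illustration.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class PaymentEventPublisher {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        // Hypothetical topic and payload: real deployments define their own schemas.
        String topic = "t24.payments";
        String key = "FT24123ABCD";  // keying by transaction reference keeps related events in order
        String value = "{\"eventType\":\"PAYMENT_PROCESSED\",\"reference\":\"FT24123ABCD\","
                + "\"amount\":\"150.00\",\"currency\":\"EUR\"}";

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // send() is asynchronous; the broker acknowledges the write when the returned Future completes.
            producer.send(new ProducerRecord<>(topic, key, value));
        }
    }
}
```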
Orchestration.
IRIS can orchestrate multi-step business processes that span multiple systems. For example, a customer onboarding process might create the customer in T24, send a notification through the CRM system, and trigger a credit check in a third-party system — all coordinated through IRIS without any custom integration code. In theory, this eliminates the need for the custom jBC routines that have been holding your bank together since 2010. In practice, you will still need some custom code, but less of it.
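The sequencing itself is conceptually simple, even when the plumbing is not. The sketch below shows the shape of such a flow in plain Java with entirely hypothetical client interfaces; it is not the IRIS orchestration API, just an illustration of the coordination the framework is meant to take off your hands.

```java
// Hypothetical client interfaces: stand-ins for whatever your integration layer provides.
interface CoreBankingClient  { String createCustomer(String customerJson); }
interface CrmClient          { void notifyCustomerCreated(String customerId); }
interface CreditCheckClient  { boolean requestCheck(String customerId); }

public class OnboardingFlow {
    private final CoreBankingClient core;
    private final CrmClient crm;
    private final CreditCheckClient credit;

    public OnboardingFlow(CoreBankingClient core, CrmClient crm, CreditCheckClient credit) {
        this.core = core;
        this.crm = crm;
        this.credit = credit;
    }

    // One onboarding request fans out to three systems; the orchestrator owns the ordering
    // and the error handling, so no system needs point-to-point knowledge of the others.
    public void onboard(String customerJson) {
        String customerId = core.createCustomer(customerJson);   // step 1: create in T24
        crm.notifyCustomerCreated(customerId);                    // step 2: notify the CRM
        if (!credit.requestCheck(customerId)) {                   // step 3: third-party credit check
            throw new IllegalStateException("Credit check failed for " + customerId);
        }
    }
}
```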
Kafka — the event backbone
Kafka has become the standard event backbone for T24 integrations. It replaces (or supplements) IBM MQ as the messaging layer, providing higher throughput, better scalability, and a pub-sub model that fits event-driven architecture naturally. It also has a learning curve that has been described as "moderately steep" by people who are being polite.
In a typical T24 + Kafka deployment:
- T24 publishes events to Kafka topics through IRIS or through custom event publishing logic in jBC routines.
- Downstream systems subscribe to the topics they care about. The payments system subscribes to the payment topic. The reporting system subscribes to the transaction topic. The fraud detection system subscribes to both (a minimal consumer sketch follows this list). Nobody subscribes to the topic nobody told them about, which is how you discover that your event schema documentation is incomplete.
- Each consumer processes events at its own pace. If the fraud detection system is down, the events remain in the Kafka topic and are replayed when the system comes back up. This is great for resilience. It is less great when the fraud detection system comes back up and processes three days' worth of events in five minutes, triggering alerts for every one of them.
- Events can be replayed if needed. If a downstream system has a bug that causes it to process events incorrectly, you can reset the consumer offset and reprocess the events from a specific point in time. This is the Kafka equivalent of a time machine, and it is as useful as it sounds.
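For reference, a downstream consumer in Java looks roughly like this. The topic and group names are assumptions; the poll, process, commit loop is the standard Kafka client pattern.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class PaymentEventConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "fraud-detection");          // each consumer group gets its own copy of the stream
        props.put("enable.auto.commit", "false");           // commit manually, only after processing succeeds
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("t24.payments"));    // hypothetical topic name
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    process(record.value());                 // your business logic
                }
                consumer.commitSync();                       // advance the offset only after processing
            }
        }
    }

    private static void process(String eventJson) { /* ... */ }
}
```

Committing only after processing gives you at-least-once delivery, which is exactly why the idempotency discussion later in this article matters.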
The shift from IBM MQ to Kafka is not just a technology change. It is an architectural change. MQ is a point-to-point messaging system — one sender, one receiver, guaranteed delivery. Kafka is a pub-sub system — one sender, many receivers, replayable events. The two models coexist in most banks, but the trend is clearly toward Kafka for new integrations. The IBM MQ administrators are not happy about this, but they are adapting.
What this means for COB
If events are published in real time, what happens to COB? The short answer is that COB does not go away, but its role changes. The long answer is more complicated, and it involves a conversation with your COB team that you should probably have soon.
In an event-driven architecture, COB is no longer the primary mechanism for moving data between systems. Instead, COB handles the things that only a batch process can do — interest calculation, fee assessment, regulatory reporting, and end-of-day position reconciliation. The real-time event stream handles everything else. This means that COB becomes smaller and more focused. Instead of processing every transaction and generating every report, COB processes only the things that require batch-level computation. The transaction data is already in the downstream systems, delivered through events as they happened.
For T24 teams, this is a significant shift. The COB schedule that has been the centre of your operational model for years becomes less critical. The event stream becomes the new centre. Monitoring COB completion is still important, but monitoring event latency and consumer health becomes equally important. You will need new dashboards. You will need new alerts. You will need to explain to your manager why you are monitoring something called "consumer lag" and why it matters.
What T24 teams need to learn
Moving to an event-driven architecture requires new skills, even for experienced T24 professionals. Some of these skills are technical. Some are conceptual. All of them are learnable, but none of them are obvious if you have spent the last ten years thinking in terms of batch schedules and file formats.
Event design.
In the traditional model, you design file formats. In the event-driven model, you design event schemas. What data does the event carry? What is the event key? What happens if the schema changes? These are design decisions that affect every consumer of the event stream, and they need to be made carefully. Changing an event schema after it is in production is like changing the layout of a form that has already been printed and distributed — technically possible, but painful for everyone involved.
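As an illustration, with field names invented for this example, a payment event might carry something like this. The explicit schema version and a stable business key are the two decisions you will be glad you made early:

```json
{
  "eventType": "PAYMENT_PROCESSED",
  "schemaVersion": 2,
  "key": "FT24123ABCD",
  "occurredAt": "2024-06-03T10:14:32Z",
  "payload": {
    "debitAccount": "10001-001",
    "creditAccount": "20002-001",
    "amount": "150.00",
    "currency": "EUR"
  }
}
```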
Idempotency.
Events can be delivered more than once. If a consumer crashes after processing an event but before committing the offset, the event will be redelivered. The consumer needs to handle this gracefully — processing the same event twice should produce the same result as processing it once. This is a new concept for T24 developers who are used to transactional processing, where a successful commit means the update happened exactly once. The phrase "at-least-once delivery" will become part of your vocabulary, and you will learn to design for it.
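A minimal sketch of the idea, assuming each event carries a unique identifier (the eventId below is hypothetical): the consumer records which events it has already applied and silently skips duplicates.

```java
import java.util.HashSet;
import java.util.Set;

public class IdempotentHandler {
    // In production this would be a durable store (for example a database table keyed by event ID),
    // not an in-memory set; this only shows the shape of the check.
    private final Set<String> processedEventIds = new HashSet<>();

    public void handle(String eventId, String payload) {
        if (!processedEventIds.add(eventId)) {
            return;  // already processed: a redelivery after a crash or rebalance, safe to ignore
        }
        applyBusinessLogic(payload);  // runs at most once per event ID
    }

    private void applyBusinessLogic(String payload) { /* ... */ }
}
```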
Error handling.
In the batch model, a failed transaction goes to a repair queue and a human fixes it. In the event-driven model, a failed event needs to be handled programmatically — retried, sent to a dead-letter queue, or escalated to an alert. The error handling logic needs to be built into the consumer, not handled by a human looking at a screen. This is better in theory. In practice, you will still have a dead-letter queue that someone needs to check manually, because some errors cannot be handled programmatically. The dead-letter queue is the new repair queue. It just has a different name.
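In code, the usual pattern is a bounded retry followed by a hand-off to a dead-letter topic. The topic name and retry count below are assumptions; the structure is the common one.

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class FailureHandling {
    private static final int MAX_ATTEMPTS = 3;
    private static final String DEAD_LETTER_TOPIC = "t24.payments.dlq";  // hypothetical name

    private final KafkaProducer<String, String> producer;

    public FailureHandling(KafkaProducer<String, String> producer) {
        this.producer = producer;
    }

    public void handleWithRetry(String key, String eventJson) {
        for (int attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
            try {
                process(eventJson);
                return;                                   // success: nothing more to do
            } catch (Exception e) {
                if (attempt == MAX_ATTEMPTS) {
                    // Out of retries: park the event on the dead-letter topic for a human to inspect.
                    producer.send(new ProducerRecord<>(DEAD_LETTER_TOPIC, key, eventJson));
                }
            }
        }
    }

    private void process(String eventJson) { /* business logic that may throw */ }
}
```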
Monitoring.
Monitoring a batch process is straightforward — did it complete? How long did it take? Monitoring an event stream is different — what is the event latency? How many events are in the backlog? Are any consumers falling behind? These metrics require new monitoring tools and new operational procedures. You will also need to explain to your operations team why "consumer lag" deserves its own alert even when nothing is visibly broken, and why a growing backlog of events is a problem even if the system is not reporting any errors.
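As a starting point, the stock Kafka tooling will show you per-partition lag for a consumer group; the group name below is an example. Lag is the log-end offset minus the group's committed offset, and a value that keeps growing means the consumer is falling behind the producers.

```sh
# Describe one consumer group and its per-partition offsets and lag (group name is an example).
kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
  --describe --group fraud-detection
```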
The bottom line
Event-driven architecture is not a theoretical future for T24. It is happening now. Banks are implementing IRIS, deploying Kafka, and moving away from batch-first integration. The technology is mature, the patterns are well understood, and the benefits — lower latency, better scalability, looser coupling — are real.
For T24 teams, the shift requires learning new concepts and new tools. But the fundamentals are the same — you still need to understand the T24 data model, the business processes, and the operational requirements. The difference is that instead of designing file formats and batch schedules, you are designing event schemas and consumer logic. It is a different way of working, but it is not a completely different job. And the good news is that if you understand COB, you already understand the business events that need to be published. You just need to learn how to publish them in real time instead of waiting until midnight.