Integration architecture

Legacy Host Integration with T24

Mainframes, AS/400, and the systems that were supposed to be replaced five years ago. Here is how T24 talks to them.

Every T24 bank has one. The mainframe. The AS/400. The legacy core banking system that was supposed to be decommissioned when T24 went live but is still running because it handles something that T24 does not — or because nobody has the courage to turn it off.

Integrating T24 with legacy host systems is a fact of life for every T24 integration architect. The legacy system is not going away. It has data that T24 needs. It has processes that T24 feeds into. And it speaks protocols that were designed before REST was a twinkle in anyone's eye.

The legacy landscape

Mainframe (IBM z/OS)

The mainframe is the granddaddy of legacy systems. It runs COBOL programs, uses VSAM or DB2 for storage, and communicates through CICS transactions or batch file transfers.

Typical integration points: Customer master data, general ledger, regulatory reporting, treasury systems.

AS/400 (IBM iSeries)

The AS/400 is the middle child of legacy systems. Less powerful than a mainframe, but more common in mid-sized banks. It runs RPG programs, uses DB2/400 for storage, and communicates through data queues or file transfers.

Typical integration points: Branch systems, loan systems, deposit systems.

Unix/Linux legacy systems

Some banks have legacy systems running on Unix or Linux that predate T24. These might be written in C, COBOL, or a proprietary language.

Typical integration points: Payment systems, reporting systems, interface engines.

The integration patterns

Pattern 1: File-based integration

This is the most common pattern. T24 and the legacy system exchange files through a shared file system or an FTP/SFTP transfer.

T24 → File Export → FTP/SFTP → Legacy System → File Import

When to use it: When the legacy system supports file-based import/export (most do) and the data volume is high.

The gotcha: File-based integration introduces latency. If the file transfer runs hourly, the data is up to an hour old.
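To make the pattern concrete, here is a minimal Python sketch of the T24-side export step: formatting records into a fixed-width layout and writing them as fixed-block EBCDIC (code page cp037) for a mainframe to pick up. The field names and widths are invented for illustration, not a real host copybook, and a real deployment would hand the file to a scheduled SFTP or managed file transfer job rather than stop here.

```python
from pathlib import Path

# Hypothetical fixed-width layout, standing in for a host COBOL copybook:
# CUSTOMER.NO (10), SHORT.NAME (20), SECTOR (4) — 34 bytes per record.
FIELD_WIDTHS = [("customer_no", 10), ("short_name", 20), ("sector", 4)]

def format_record(record: dict) -> str:
    """Pad each field to its fixed width, left-justified, truncating overflow."""
    return "".join(
        str(record.get(name, "")).ljust(width)[:width]
        for name, width in FIELD_WIDTHS
    )

def export_extract(records: list[dict], out_path: Path) -> int:
    """Write a fixed-block EBCDIC (cp037) extract — no line terminators,
    the way a mainframe FB dataset expects it. Returns the record count."""
    with open(out_path, "wb") as f:
        for rec in records:
            f.write(format_record(rec).encode("cp037"))
    return len(records)
```

The fixed record length is what makes the file auditable and replayable: byte offset divided by record length gives you the record number, with no parsing required.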

Pattern 2: MQ-based integration

If the legacy system supports MQ (IBM MQ), this is the preferred pattern. MQ provides guaranteed delivery, asynchronous processing, and load leveling.

T24 → MQ Queue → Legacy System MQ Listener → Legacy System

When to use it: When the legacy system supports MQ and you need reliable, asynchronous communication.
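The shape of the pattern — T24 publishes and moves on, a legacy-side listener consumes and acknowledges — can be sketched with an in-process stand-in. In production this would be an IBM MQ queue manager accessed through a client library (pymqi in Python, or the native MQ API), not a `queue.Queue`; the message format and names here are invented for illustration.

```python
import queue
import threading

def legacy_listener(q: queue.Queue, handled: list, stop: threading.Event) -> None:
    """Legacy-side listener: consume messages until told to stop.
    task_done() plays the role of an MQ commit/acknowledge."""
    while not stop.is_set():
        try:
            msg = q.get(timeout=0.1)
        except queue.Empty:
            continue
        handled.append(msg)  # hand off to the legacy program (RPG/COBOL) here
        q.task_done()

def publish(q: queue.Queue, message: str) -> None:
    """T24 side: put the message on the queue and carry on.
    The queue absorbs bursts — this is the load leveling."""
    q.put(message)
```

The point of the asynchronous shape is that T24 never waits on the legacy system: if the listener is down, messages accumulate on the queue and are delivered when it comes back, which is exactly the guaranteed-delivery behavior MQ provides.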

Pattern 3: Screen scraping (the pattern we do not talk about)

Screen scraping is the integration pattern that nobody admits to using but everyone has used at least once. It involves a program that logs into the legacy system's terminal interface, navigates through screens, and extracts or enters data.

When to use it: Never. If you are screen scraping, you have failed at integration architecture. But sometimes it is the only option.

The gotcha: Screen scraping is fragile. Any change to the legacy system's screen layout breaks the integration.

Pattern 4: Database-level integration

Database-level integration involves reading from or writing to the legacy system's database directly. This is the nuclear option — powerful, but dangerous.

When to use it: When you need real-time access to legacy system data and no other integration method is available.

The gotcha: Direct database access bypasses the legacy system's business logic. This is a last resort, not a first choice.
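If you must do this, keep the access read-only and confined to one module, so there is exactly one place to fix when the legacy schema changes. A minimal sketch, using sqlite3 as a stand-in for what would really be DB2 over ODBC/JDBC (the table and column names are invented):

```python
import sqlite3

def read_legacy_balance(conn: sqlite3.Connection, account_no: str):
    """Read-only lookup against the legacy system's tables.

    Never write through this path — an UPDATE here would bypass the
    legacy system's business logic entirely. With SQLite you can even
    enforce it by opening with connect("file:legacy.db?mode=ro", uri=True);
    on DB2 the equivalent is a read-only database user.
    """
    row = conn.execute(
        "SELECT balance FROM legacy_accounts WHERE account_no = ?",
        (account_no,),
    ).fetchone()
    return row[0] if row else None
```

Funneling every direct query through one small function like this turns "the nuclear option" into something at least containable: the coupling to the legacy schema is visible and grep-able.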

The R24 perspective

R24 does not change the fundamental challenge of legacy host integration, but it does provide some tools that make it easier:

  • OFS API hooks (IN.MSG.RTN, OUT.MSG.RTN, MSG.PRE.RTN, MSG.POST.RTN) allow you to transform data between T24 and legacy formats without modifying core T24 code.
  • IRIS event publishing (Session.publishMessage()) allows T24 to publish events that a legacy integration layer can consume and translate for the legacy system.
  • TEF extensibility (PaymentLifecycle, RecordLifecycle, Delivery hooks) provides Java-based hooks that can call legacy system APIs or services.
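The kind of transformation an inbound OFS hook performs — turning a legacy fixed-width record into an OFS request — looks roughly like this. A real IN.MSG.RTN would be written in jBC inside T24; this Python sketch only shows the shape, and the field positions, version name (HOST.FEED) and credentials are all invented for illustration:

```python
def legacy_to_ofs(raw: str) -> str:
    """Translate a hypothetical fixed-width legacy payment record into an
    OFS request string of the general form:
    APPLICATION,VERSION/FUNCTION/PROCESS,USER/PASSWORD,RECORD.ID,FIELD:MV:SV=VALUE,...
    """
    debit_acct  = raw[0:10].strip()
    credit_acct = raw[10:20].strip()
    amount      = raw[20:30].strip()
    currency    = raw[30:33].strip()

    fields = {
        "DEBIT.ACCT.NO": debit_acct,
        "CREDIT.ACCT.NO": credit_acct,
        "DEBIT.AMOUNT": amount,
        "DEBIT.CURRENCY": currency,
    }
    body = ",".join(f"{name}:1:1={value}" for name, value in fields.items())
    # Empty RECORD.ID lets T24 assign the transaction reference.
    return f"FUNDS.TRANSFER,HOST.FEED/I/PROCESS,INPUTT/SECRET,,{body}"
```

The value of doing this in a hook rather than in core code is exactly what the bullet above says: the legacy layout can change without touching T24's transaction processing, because the translation lives at the boundary.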

The practical reality

  1. The legacy system is not going away. Accept this. Plan for it. Design your integration architecture around it.
  2. File-based integration is the most reliable. It is not the fastest, but it is the most reliable. Files do not time out. Files do not get lost in network failures. Files can be audited and replayed.
  3. MQ is the best option for real-time. If the legacy system supports MQ, use it. It provides guaranteed delivery and asynchronous processing.
  4. Avoid screen scraping. It is fragile, slow, and impossible to maintain.
  5. Database-level integration is a last resort. It bypasses business logic and creates maintenance nightmares.

The bottom line

Legacy host integration is not glamorous. It does not make the front page of architecture diagrams. But it is the reality of every T24 bank.

The best integration architects are the ones who understand the legacy system's capabilities and limitations, and design integration patterns that work within those constraints. They do not try to force the legacy system to be something it is not. They work with what they have.

And sometimes, what they have is a 30-year-old mainframe that speaks a protocol that was designed before the internet existed. And that is okay. File-based integration with a well-designed file format and a reliable transfer mechanism is still better than a fragile, real-time integration that breaks every time the legacy system is patched.