TAFJ operations
TAFJ Log File Locations: Where to Look When Something Goes Wrong
In TAFC, there was one log worth checking. In TAFJ, there are at least six — maintained by different parts of the stack, stored in different directories, and none of them alone tells you what happened. The order you check them in matters as much as what you find.
The first time most TAFC developers go looking for a log in TAFJ, they go to where it would have been in TAFC. They find either nothing, or something that is technically a log but is not the log they needed. They go looking somewhere else. This repeats. Eventually someone who has done this before tells them where to look, and they write it down in a place that will be lost within six months.
This is not because TAFJ is badly organised. It is because TAFJ runs on a layered stack — an application server, a JVM, a T24 runtime, a set of services, and a database — and each layer keeps its own records of what it was doing when things went wrong. The discipline is not finding the right log. It is checking them in an order that follows the dependency chain, so you find the first failure rather than the loudest one.
Failures in lower layers produce noise in higher ones. If you start with application-level errors, you may spend an hour chasing symptoms that were caused by something that happened before T24 finished starting up. The sequence matters.
Where the Logs Are
The paths below use standard TAFJ on JBoss/WildFly on Linux, which covers most production environments. Your specific paths depend on how TAFJ was installed — run tDiag to confirm what your environment thinks the paths are, and trust that over this table.
| Log | Default location | What it covers |
|---|---|---|
| JBoss server log | $JBOSS_HOME/standalone/log/server.log | Application server startup, deployment errors, datasource connections, JVM exceptions, thread and heap issues |
| Deployment markers | $JBOSS_HOME/standalone/deployments/ | Whether each deployed artifact (WAR, EAR, JAR) succeeded or failed — .deployed, .failed, .undeployed |
| TAFJ runtime log | $TAFJ_HOME/logs/ (as set in properties) | TAFJ-level runtime output, session failures, routine-level errors surfaced by the runtime |
| COB log | $TAFJ_HOME/logs/cob/COB_YYYYMMDD.log | COB job sequence, stage timing, batch errors, reverse replay activity |
| Service / agent logs | $TAFJ_HOME/logs/services/ | Individual service behaviour — OFS listener, MQ consumer, scheduled services |
| TemnLogger output | Set by temn.tafj.log.dir in your properties file | Routine-level logging from T24 programs — what TAFJ writes when your code uses the logger; the closest equivalent to the routine-level output you relied on in TAFC |
| JVM GC log | Set by -Xloggc in JVM startup arguments | Garbage collection frequency and duration — useful when the application is slow rather than broken |
The Temenos documentation covers most of these. It is, as is traditional, thorough about their existence and optimistic about how easily you will find them in a production environment configured three years ago by someone who has since left.
The Triage Order
The right sequence follows the dependency chain from the bottom up. The application server starts first. If it fails, nothing above it has anything useful to report. Start there.
1. JBoss server log — is the application server healthy?
The JBoss log is the application server’s running commentary on its own existence. Check it first for the same reason you check the foundation before looking at the wallpaper. Startup failures, deployment errors, datasource connection problems, and JVM-level exceptions all land here before they become anything else.
```shell
tail -200 $JBOSS_HOME/standalone/log/server.log
grep -iE "error|exception|failed|warn" $JBOSS_HOME/standalone/log/server.log | tail -50
```
Look for the first ERROR or WARN-level entry. If the server is unhealthy, stop here and fix it — checking anything above this layer while the application server is struggling is an exercise in reading the wrong evidence.
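Finding the first ERROR rather than the latest one is easy to get wrong with tail alone. A minimal sketch of a helper for it — the level keywords are the common JBoss defaults, so adjust the pattern if your server.log formats levels differently:

```shell
# Print the FIRST ERROR/WARN entry of a log, with its line number,
# instead of tailing the most recent (and often loudest) output.
first_error() {
  grep -n -m 1 -E "ERROR|WARN" "$1"
}

# Typical use during triage (path is the JBoss default):
# first_error "$JBOSS_HOME/standalone/log/server.log"
```

The line number it prints gives you a fixed point to read forward from, rather than scrolling backwards through consequences.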
2. Deployment markers — did everything actually deploy?
TAFJ deployments leave marker files in the deployments directory. A successful deployment creates a .deployed marker. A failed one creates a .failed marker and, usually, a corresponding log entry.
```shell
ls -la $JBOSS_HOME/standalone/deployments/
# Look for anything that is not .deployed
ls $JBOSS_HOME/standalone/deployments/*.failed 2>/dev/null
ls $JBOSS_HOME/standalone/deployments/*.undeployed 2>/dev/null
```
A failed deployment is worth checking before anything else because it produces no useful output in any other log. The application did not start, so it has nothing to report. The deployment marker has the only evidence. This is the log equivalent of checking whether the power is on.
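Since a bare ls exits noisily when there is nothing to find, a small sketch of the same check as a reusable helper — the directory layout assumed is the standard standalone/deployments one:

```shell
# List marker files that indicate a failed or undeployed artifact.
# find returns cleanly even when there is nothing to report.
deployment_problems() {
  find "$1" -maxdepth 1 \( -name '*.failed' -o -name '*.undeployed' \) | sort
}

# deployment_problems "$JBOSS_HOME/standalone/deployments"
```

Empty output from this is the "power is on" confirmation; anything it prints is where the investigation starts.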
3. TAFJ runtime log — what is T24 doing?
Once the application server looks healthy and the deployments are confirmed, the TAFJ runtime log is where T24’s own story begins. Session failures, authentication problems, routine-level runtime errors, and connection pool issues that are not caught at the JBoss layer surface here.
```shell
# Confirm the log directory from your properties file first
grep "temn.tafj.log" /path/to/your/tafj.properties
tail -200 $TAFJ_HOME/logs/tafj.log
grep -iE "error|exception|failed" $TAFJ_HOME/logs/tafj.log | tail -30
```
4. COB log or service logs — what was the specific process doing?
If the issue is specific to COB, batch, or a particular service, go to the relevant log next. COB has its own log per run. Services have their own logs per service. Looking at the right one rather than the general TAFJ runtime log saves time — the specific log is more verbose about the specific failure.
```shell
# COB log for today
tail -100 $TAFJ_HOME/logs/cob/COB_$(date +%Y%m%d).log
# Service logs — list them to find the relevant one
ls -lt $TAFJ_HOME/logs/services/ | head -10
# Watch a specific service log
tail -100 $TAFJ_HOME/logs/services/<service-name>.log
```
5. Database-side — if the application is healthy but results are wrong
If the application server is fine, everything deployed, and TAFJ appears to be running — but the output is wrong, queries are slow, or something is not processing correctly — the database is the next layer to check. Blocking sessions, long-running queries, index fragmentation, and connection pool exhaustion all behave like T24 problems until you look at the database and discover they are database problems.
Database log locations vary by database engine — Oracle alert log, SQL Server error log, PostgreSQL log. Your DBA knows where these are. Ask them rather than finding out during an incident.
6. MQ / messaging logs — last unless the complaint is about messages
MQ is often blamed early in a TAFJ investigation. It is the most visible external dependency, it has its own admin tools, and it produces comprehensible logs. None of that makes it the likely root cause when the application server has not finished starting. Check MQ logs when the complaint is specifically about message flows — payments not moving, OFS responses not arriving, interfaces not triggering. Not before confirming the layers above are healthy.
```shell
# IBM MQ error log
tail -100 /var/mqm/qmgrs/<QMGRNAME>/errors/AMQERR01.LOG
# TAFJ service log for the MQ listener (more useful than the MQ log for T24-specific issues)
tail -100 $TAFJ_HOME/logs/services/<mq-listener-service>.log
```
What Good Log Reading Actually Looks Like
Good log reading in a TAFJ incident is less about reading everything and more about finding the earliest point of failure. That usually means scanning for the first ERROR-level entry in each layer, deciding whether the real problem is above or below that point, and moving in the right direction.
The common mistake is searching for familiar keywords in whichever log is easiest to open. That works sometimes. It wastes significant time when the real failure is two layers lower and the familiar log is only reflecting its consequences. An application-level error caused by a failed deployment looks, from inside the application log, exactly like an application error. Distinguishing the two requires checking the deployment layer first.
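The cross-layer comparison can be sketched as a single pass — print the first ERROR from each layer's log so you can see which one failed first, rather than opening whichever is easiest. The paths fed in are your environment's, confirmed via tDiag:

```shell
# Print the first ERROR from each log passed in, so the layers can be
# compared side by side and the earliest failure identified.
first_error_per_layer() {
  for f in "$@"; do
    printf '%s: ' "$f"
    grep -m 1 "ERROR" "$f" || echo "(no ERROR entries)"
  done
}

# first_error_per_layer "$JBOSS_HOME/standalone/log/server.log" "$TAFJ_HOME/logs/tafj.log"
```

Reading the output top to bottom in dependency order — JBoss first, TAFJ runtime after — makes the "above or below this point?" decision explicit.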
It is also worth noting that TAFJ log verbosity is configurable. A production environment running at INFO level will show considerably less than a test environment at DEBUG. If an incident produces no useful log output at the expected layer, check the verbosity setting before concluding that the layer is healthy — it may simply be quiet on purpose.
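Checking the verbosity setting means reading the properties file. A generic sketch for pulling one key out of a Java-style key=value file — the exact key that controls TAFJ verbosity varies by version, so confirm the name against your installation rather than guessing:

```shell
# Read one key from a Java-style key=value properties file.
# Note: the key is used as a regular expression, so dots match loosely —
# acceptable for a quick interactive check.
read_property() {
  sed -n "s/^[[:space:]]*$2[[:space:]]*=[[:space:]]*//p" "$1" | head -n 1
}

# read_property /path/to/your/tafj.properties temn.tafj.log.dir
```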
The Comparison With TAFC
In TAFC, there was generally one log worth checking. This was not because TAFC was better-organised. It was because TAFC did not have enough moving parts to require more. One runtime, one log, one place to look. The investigation was simpler because the architecture was simpler.
TAFJ trades that simplicity for performance, scalability, and a modern runtime that behaves well under load. The cost is that the logging footprint reflects the architecture: distributed, layered, and spread across directories that nobody thinks to document until someone needs them urgently.
The practical response is to build a log location reference before you need it — confirm the paths for your specific environment using tDiag, write them down somewhere the whole team can find them, and verify them in a non-production environment before the next incident requires you to find them under pressure. The environment that is perfectly documented during a calm Tuesday is the one that gets resolved quickly on a Saturday night.
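Once the reference sheet exists, it is worth verifying mechanically rather than by eye. A sketch of a pre-incident check that takes the documented paths and reports whether each one actually exists in this environment:

```shell
# Verify that each documented log location exists here and now.
# Feed it the paths from your team's reference sheet.
check_log_paths() {
  for p in "$@"; do
    if [ -e "$p" ]; then
      echo "OK    $p"
    else
      echo "MISS  $p"
    fi
  done
}

# check_log_paths "$JBOSS_HOME/standalone/log/server.log" "$TAFJ_HOME/logs"
```

Run it in non-production first, then against production during a calm window — a MISS line on a Tuesday is a documentation fix; the same MISS on a Saturday night is an incident extender.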
Quick Reference
The triage order in one place:
- JBoss server log — is the application server healthy? ($JBOSS_HOME/standalone/log/server.log)
- Deployment markers — did everything deploy? ($JBOSS_HOME/standalone/deployments/*.failed)
- TAFJ runtime log — what is T24 doing? ($TAFJ_HOME/logs/)
- COB or service log — what was the specific process doing? ($TAFJ_HOME/logs/cob/ or $TAFJ_HOME/logs/services/)
- Database — is the data layer behaving correctly? (Ask your DBA)
- MQ / messaging — only when the complaint is specifically about message flows
Run tDiag to confirm the actual paths in your environment. The paths above are defaults — your installation may differ, and discovering that during an incident is a uniquely avoidable experience.
