Challenge - Transaction Monitoring

High alert volumes but low signal clarity.

Transaction monitoring generates a high volume of alerts, but institutions struggle to distinguish meaningful risk signals from background noise.

As a result:

  • Investigative capacity is consumed by low-value alerts

  • Genuine risks are harder to identify and prioritise

  • Effectiveness is reduced despite high levels of activity

The challenge is not the presence of alerts, but the difficulty in extracting signal from noise.

The Underlying Issue

Transaction monitoring is not failing because financial crime is too complex to detect.

It is failing because it is designed to generate alerts, rather than to produce and assess meaningful risk insight.

The Nature of the Problem

  • Alerts are rule-generated demand, not risk-driven demand

  • Signal-to-noise ratio is inherently low

  • Capacity is consumed by non-risk work

  • Flow is unmanaged (backlogs, delays)

  • Feedback loops are weak

Without managing these dynamics, high volumes and low effectiveness are inevitable.

Why This Is Difficult

  • Fragmented client and transaction data

  • Lack of client/network context

  • Generic or poorly calibrated scenarios

  • Growing transaction volumes

  • Regulatory expectation of “effectiveness”

What This Leads To

  • High false positives

  • Investigation backlogs

  • Inconsistent quality

  • Missed or delayed detection

  • Rising cost without improved outcomes

A Different Approach

Transaction monitoring needs to become a risk insight capability rather than an alerting system:

  • Alerts → Signals

  • Transactions → Client behaviour

  • Volume → Prioritisation

  • Activity → Effectiveness

How It Works

Improving transaction monitoring is not about refining alerts.
It is about redesigning how signals are created, interpreted, and acted upon.

This requires a small number of structural changes:


1. Start with Client Context

Transactions only have meaning when viewed in the context of:

  • The client

  • Their expected behaviour

  • Their relationships and network

  • The underlying business arrangement

Monitoring must be anchored in a current, coherent client view (CLM / ERR).
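To make "anchored in a client view" concrete, the sketch below represents that view as a small data structure every monitoring decision consults. The field names, the 1.5x tolerance band, and the helper function are illustrative assumptions, not a prescribed data model.

```python
from dataclasses import dataclass, field

@dataclass
class ClientContext:
    """Illustrative client view anchoring monitoring decisions (fields are hypothetical)."""
    client_id: str
    segment: str                       # e.g. "retail", "SME", "correspondent"
    expected_monthly_volume: float     # drawn from onboarding / periodic review
    expected_counterparties: set[str] = field(default_factory=set)
    related_clients: set[str] = field(default_factory=set)  # network relationships

def is_within_expected(ctx: ClientContext, monthly_volume: float, counterparty: str) -> bool:
    """A transaction only has meaning relative to this client's expected behaviour."""
    volume_ok = monthly_volume <= 1.5 * ctx.expected_monthly_volume  # tolerance band (assumed)
    counterparty_ok = counterparty in ctx.expected_counterparties
    return volume_ok and counterparty_ok
```

The point of the sketch is the dependency direction: the rule consults the client view, rather than applying one client-agnostic threshold to all transactions.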


2. Generate Signals, Not Alerts

Rules should not aim to detect “breaches”.

They should aim to identify patterns that deviate from expected behaviour.

  • Combine related activity

  • Incorporate client context

  • Reduce isolated, low-value triggers

The objective is fewer, more meaningful signals.


3. Prioritise Before Investigation

Not every signal should become work.

Introduce a layer that:

  • Assesses relevance

  • Groups related signals

  • Ranks by potential risk

Investigators should receive prioritised, contextualised cases, not raw alerts.
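The triage layer described above can be sketched as a small function that groups signals by client and ranks the resulting cases. The signal shape (`client_id`, `risk_score`) and the additive ranking are assumptions for illustration; real scoring would weigh signal type, recency, and network context.

```python
from collections import defaultdict

def build_cases(signals: list[dict]) -> list[dict]:
    """Group related signals into one case per client and rank by combined risk,
    so investigators receive prioritised, contextualised cases, not raw alerts."""
    by_client: dict[str, list[dict]] = defaultdict(list)
    for s in signals:
        by_client[s["client_id"]].append(s)          # group related signals
    cases = [
        {"client_id": cid,
         "signals": group,
         "priority": sum(s["risk_score"] for s in group)}  # simple additive rank (assumed)
        for cid, group in by_client.items()
    ]
    return sorted(cases, key=lambda c: c["priority"], reverse=True)
```

Note that grouping happens before ranking: three weak signals on one client may outrank a single stronger signal elsewhere, which a raw alert queue cannot express.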


4. Design and Manage Flow

Monitoring is a continuous flow of signals, not a queue of tasks.

  • Control inflow through scenario design

  • Limit work-in-progress

  • Align capacity to prioritised demand

This stabilises throughput and improves timeliness.
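The work-in-progress limit can be sketched as a pull-based intake step: new cases are admitted only while investigators have capacity, instead of pushing every signal into an unbounded backlog. The limit value and case shape are illustrative.

```python
def admit_work(queue: list[dict], in_progress: list[dict], wip_limit: int = 20) -> list[dict]:
    """Pull-based intake: admit new cases only while work-in-progress is under the limit.

    Bounding WIP keeps investigation timely for the cases that are admitted,
    instead of spreading effort thinly across a growing backlog.
    The limit of 20 is an illustrative assumption.
    """
    admitted: list[dict] = []
    while queue and len(in_progress) + len(admitted) < wip_limit:
        admitted.append(queue.pop(0))   # queue is assumed pre-sorted by priority
    return admitted
```

Because the queue is priority-ordered, constraining inflow this way automatically concentrates capacity on the highest-risk cases first.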


5. Learn and Adapt Continuously

The system must improve through use.

  • Investigation outcomes refine scenarios

  • False positives are analysed and reduced

  • Data gaps are identified and corrected

This creates a closed feedback loop between execution and design.
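The closed loop between execution and design can be sketched as a simple tuning rule: investigation outcomes feed back into the scenario threshold, tightening it when false positives dominate and cautiously loosening it otherwise. The target rate, step size, and update rule are all illustrative assumptions.

```python
def tune_threshold(threshold: float, outcomes: list[bool],
                   target_fp_rate: float = 0.7, step: float = 0.05) -> float:
    """Feedback-loop sketch: each outcome is True for a productive investigation,
    False for a false positive.

    If the false-positive rate exceeds the target, raise the threshold so the
    scenario generates fewer, higher-quality signals; otherwise lower it
    slightly so coverage is not silently eroded. Parameters are illustrative.
    """
    if not outcomes:
        return threshold                    # no evidence yet: leave the design alone
    fp_rate = outcomes.count(False) / len(outcomes)
    if fp_rate > target_fp_rate:
        return threshold * (1 + step)       # tighten: fewer, more meaningful signals
    return threshold * (1 - step / 2)       # loosen cautiously to maintain coverage
```

In practice this kind of adjustment would be reviewed and governed rather than fully automated, but the structure is the point: scenario design becomes a function of investigation outcomes, not a one-off calibration.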

What This Achieves

  • Lower alert volumes, higher signal quality

  • Reduced investigative burden

  • Faster identification of genuine risk

  • More consistent and defensible outcomes

Effective Transaction Monitoring

Effective transaction monitoring is achieved by designing a system that:

  • Understands the client

  • Generates meaningful signals

  • Prioritises what matters

  • Manages operational flow

  • Continuously improves through feedback

Not by optimising how alerts are processed.