
Project Hertha: Are We Finally Looking Beyond Our Own Walls to Fight Financial Crime?
The Big Idea: A Helping Hand, Not a Replacement
As a compliance professional, you know the daily grind. The alerts pile up, the false positives are a constant drain on resources, and you’re always haunted by the feeling that you’re only seeing one small piece of a much larger, more sinister puzzle. Criminals don’t respect institutional boundaries; their networks are designed to cut across multiple banks, making it nearly impossible for any single one of us to connect the dots.
So, when the BIS Innovation Hub and the Bank of England drop a report like Project Hertha, we sit up and take notice. This isn't just another academic paper. It's a hands-on exploration of a concept that could fundamentally enhance how we detect illicit activity: leveraging the network-wide view of real-time payment systems.
The project’s core question is simple but profound: What if the payment system itself could help us spot the complex criminal activity that’s invisible from our siloed perspective?
A Collaborative Model
Let's be clear from the outset. Project Hertha doesn't propose stripping banks of their financial crime-fighting responsibilities. Instead, it tests a collaborative model where the payment system acts as a supplementary intelligence source.
Imagine an AI-powered analytics engine sitting at the heart of the payment system. It sees the flow of funds across all participants and can identify network-wide patterns that suggest coordinated criminal schemes. It then provides risk indicators on suspicious accounts back to the individual banks. The bank, armed with this new network-level insight, can then fuse it with its own deep customer knowledge to make a much more informed decision.
This is the concept Hertha put to the test.
This model isn't about blindly trusting an external score; it's about adding a powerful new data point to our existing arsenal.
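To make that concrete, here's a minimal sketch of how a payment system operator might derive a network-level risk indicator from the cross-bank flows it can see. Everything in it (the networkx graph, the toy "gather-scatter" heuristic, the account IDs and thresholds) is an illustrative assumption, not the actual analytics engine Hertha tested.

```python
# Minimal sketch: a payment system operator computes a simple network-level
# risk indicator. Library choice, thresholds, and field names are all
# illustrative assumptions, not Project Hertha's design.
import networkx as nx

# Each transaction is (sender_account, receiver_account, amount). They can
# cross institutional boundaries; the operator sees all of them.
transactions = [
    ("acct_A1", "acct_M1", 9_500),   # many senders at different banks...
    ("acct_B2", "acct_M1", 9_200),
    ("acct_C3", "acct_M1", 9_800),
    ("acct_M1", "acct_D4", 14_000),  # ...funnelled out again ("gather-scatter")
    ("acct_M1", "acct_E5", 14_200),
]

graph = nx.DiGraph()
for sender, receiver, amount in transactions:
    if graph.has_edge(sender, receiver):
        graph[sender][receiver]["amount"] += amount   # accumulate repeat edges
    else:
        graph.add_edge(sender, receiver, amount=amount)

def network_risk_indicator(g: nx.DiGraph, account: str) -> float:
    """Toy score: accounts that both gather from many counterparties and
    scatter to many counterparties look more like pass-through mules."""
    fan_in = g.in_degree(account)
    fan_out = g.out_degree(account)
    return min(1.0, (fan_in * fan_out) / 10.0)

indicators = {acct: network_risk_indicator(graph, acct) for acct in graph.nodes}
# The operator shares these per-account indicators with each bank, which then
# combines them with its own customer knowledge before deciding anything.
print(sorted(indicators.items(), key=lambda kv: -kv[1])[:3])
```

A real engine would obviously be far richer than a degree count, but the point stands: the operator computes something no single bank can compute alone, and hands it over as one more input, not a verdict.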
How They Modelled the Underworld
To test this without touching a single piece of real customer data, the project team built a large-scale synthetic data set. We’re talking about 1.8 million artificial bank accounts and 308 million transactions, all generated by an AI model trained to mimic realistic financial behaviours.
Crucially, they didn't just simulate legitimate activity. They embedded 2,000 money laundering schemes based on 10 common typologies, drawing on expert input and published reports. This gave them a realistic, complex, and safe environment to test their models against.
For any seasoned compliance officer, these typologies—from "gather-scatter" to complex cycles—are all too familiar. By modelling them realistically, the project created a credible benchmark for its findings.
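As a rough illustration of what "embedding a typology" means, here's a toy sketch that drops a single labelled gather-scatter scheme into a background of synthetic payments. The crude random sampler is my simplification for brevity; the project used an AI generator trained to mimic realistic behaviour, and embedded 2,000 schemes across 10 typologies.

```python
# Toy illustration of embedding a labelled "gather-scatter" laundering pattern
# inside legitimate-looking synthetic payments. This crude random sampler is an
# assumption for brevity; it is nothing like the AI generator the project used.
import random

random.seed(42)
accounts = [f"acct_{i:05d}" for i in range(1_000)]

def legitimate_transactions(n):
    """Background activity: random payments between random accounts."""
    for _ in range(n):
        sender, receiver = random.sample(accounts, 2)
        yield {"sender": sender, "receiver": receiver,
               "amount": round(random.uniform(10, 2_000), 2), "illicit": False}

def gather_scatter_scheme(mule, sources, destinations):
    """Many payments converge on a mule account, then fan out again."""
    for src in sources:
        yield {"sender": src, "receiver": mule,
               "amount": round(random.uniform(8_000, 9_900), 2), "illicit": True}
    for dst in destinations:
        yield {"sender": mule, "receiver": dst,
               "amount": round(random.uniform(8_000, 9_900), 2), "illicit": True}

dataset = list(legitimate_transactions(10_000))
dataset += list(gather_scatter_scheme("acct_00042",
                                      sources=random.sample(accounts, 8),
                                      destinations=random.sample(accounts, 3)))
random.shuffle(dataset)
print(sum(tx["illicit"] for tx in dataset), "illicit transactions embedded")
```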
The Results: Where Network Analytics Truly Shines
So, did it work? The findings are both humbling and incredibly promising.
First, the humbling part: working in isolation, the payment system's model was slightly less effective than the banks' models, identifying 39% of illicit accounts compared to the banks' 44%. This makes sense. Banks have a wealth of customer data that a payment system operator simply doesn't. This isn't a silver bullet.
But here’s where it gets exciting. The real value isn't in replacement, but in collaboration.
When banks actively incorporated the payment system's insights into their own models, they identified 12% more illicit accounts than they would have on their own.
The value-add becomes even more pronounced when you dig into the details:
- Spotting the Unknown: For identifying new and previously unseen criminal behaviours, this collaborative approach provided a 26% improvement. In a world where criminals constantly evolve their tactics, this is a massive advantage.
- Tackling Complexity: The system proved most effective at identifying complex schemes involving many accounts across different banks. For some of these typologies, the project found it could double detection accuracy. This directly targets the cross-institutional blindness that plagues us today.
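To picture what "incorporating the payment system's insights" could look like mechanically, here's a minimal sketch in which the network-level indicator is simply appended to the bank's own features before its model is trained. The synthetic data, feature choices, and logistic regression are my assumptions, not the project's methodology; the exact numbers it prints mean nothing, but the shape of the comparison, bank-only versus bank-plus-network, mirrors the one the report makes.

```python
# Minimal sketch of "feature fusion": a bank augments its own model with the
# network-level risk indicator supplied by the payment system. Data, features,
# and model choice are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)
n = 5_000
is_illicit = rng.random(n) < 0.02                      # ~2% mule accounts

# Features the bank already holds (tenure, turnover, KYC risk rating)...
bank_features = rng.normal(size=(n, 3)) + is_illicit[:, None] * 0.6
# ...plus the indicator the payment system shares, carrying the cross-bank
# signal the bank cannot see on its own.
network_indicator = rng.normal(size=(n, 1)) + is_illicit[:, None] * 1.5

X_fused = np.hstack([bank_features, network_indicator])

for name, X in [("bank-only", bank_features), ("bank + network", X_fused)]:
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, is_illicit, test_size=0.3, random_state=0, stratify=is_illicit)
    model = LogisticRegression(class_weight="balanced").fit(X_tr, y_tr)
    recall = recall_score(y_te, model.predict(X_te))
    print(f"{name}: recall on illicit accounts = {recall:.2f}")
```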
Making It Real: The All-Important Feedback Loop
A model is only as good as the data it learns from. The project's results scream one thing loud and clear: for this to work, you need high-quality, labelled training data.
Unsupervised models, which look for anomalies without past examples, performed poorly, flagging huge numbers of false positives. Supervised models, trained on confirmed past cases, were far more effective.
This underscores the absolute necessity of a robust feedback loop between the payment system and the participating banks. The system flags an account, the bank investigates, and critically, the bank reports the outcome back to the system. This creates a virtuous cycle of continuous improvement, allowing the model to get smarter and more accurate over time.
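A sketch of that cycle, under heavy assumptions (a stand-in investigate() function where the human judgment would actually sit, a random forest as the supervised model), might look like this:

```python
# Sketch of the flag -> investigate -> label -> retrain feedback loop. The
# model, the investigate() stub, and the data layout are assumptions used to
# illustrate the cycle, not Project Hertha's architecture.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
labelled_X, labelled_y = [], []               # grows as investigations conclude

def investigate(account_features) -> bool:
    """Stand-in for a human investigation at the bank; in reality this is
    where the compliance officer's expert judgment enters the loop."""
    return bool(account_features[-1] > 1.0)   # toy ground truth

model = None
for cycle in range(3):
    new_accounts = rng.normal(size=(200, 4))             # this cycle's activity
    if model is not None:
        scores = model.predict_proba(new_accounts)[:, 1]
        flagged = new_accounts[scores > 0.5]              # system flags accounts
    else:
        flagged = new_accounts[:20]                       # cold start: sample
    for features in flagged:                              # banks investigate...
        labelled_X.append(features)
        labelled_y.append(investigate(features))          # ...and report back
    if len(set(labelled_y)) > 1:                          # need both outcomes
        model = RandomForestClassifier(random_state=0)
        model.fit(np.array(labelled_X), np.array(labelled_y))
    print(f"cycle {cycle}: {len(labelled_y)} labelled outcomes available")
```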
This feedback loop is the engine that would power such a system in the real world. It ensures the AI is constantly learning from the expert judgment of human investigators. Furthermore, the project highlights the need for "Explainable AI." A compliance officer needs more than a risk score; they need to understand the "why"—the variables and patterns that triggered the alert—to conduct a meaningful investigation.
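For a simple linear model, one honest way to surface that "why" is to show each feature's contribution to the alert's score. The feature names and data below are made up for illustration; real explainability tooling would go further, but the shape of the output, a score plus its ranked drivers, is what an investigator needs to act on.

```python
# Sketch of an explainable alert: a risk score plus the per-feature
# contributions behind it. Feature names and data are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["fan_in_count", "fan_out_count",
                 "avg_inbound_amount", "pass_through_ratio"]

rng = np.random.default_rng(7)
y = rng.random(2_000) < 0.05
X = rng.normal(size=(2_000, 4)) + y[:, None] * np.array([1.5, 1.2, 0.3, 1.8])

model = LogisticRegression(class_weight="balanced").fit(X, y)

def explain_alert(account_features):
    """For a linear model, coefficient x feature value gives each feature's
    contribution to the alert's log-odds; rank them by magnitude."""
    contributions = model.coef_[0] * account_features
    return sorted(zip(feature_names, contributions), key=lambda kv: -abs(kv[1]))

suspicious = X[y][0]                                  # one illicit account
print("risk score:", round(model.predict_proba([suspicious])[0, 1], 2))
for name, contribution in explain_alert(suspicious):
    print(f"  {name}: {contribution:+.2f}")
```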
The Bigger Picture: A New Tech Stack for Financial Integrity
Project Hertha doesn't exist in a vacuum. The BIS Innovation Hub positions it as a key component in a broader "financial integrity technology stack." This vision combines multiple initiatives to create a more holistic defence against financial crime.
In this vision, you have:
- Project Mandala: Using technology for automated, programmable compliance checks before a transaction even happens.
- Project Hertha: Providing real-time transaction monitoring within the payment system itself.
- Project Aurora: Enabling collaborative analytics and information sharing across institutions to investigate the most complex networks.
Together, they represent a future where compliance is more proactive, collaborative, and data-driven.
Final Thoughts
Project Hertha is more than an interesting experiment. It’s a practical demonstration that looking beyond our own institutional walls is not only possible but essential. The 12% overall improvement in detection is a material gain that could translate into billions in intercepted illicit funds and a more secure financial system for everyone.
This isn't about replacing the skill and judgment of the compliance officer. It's about augmenting it. It’s about giving us a new lens to see the threats we currently miss and allowing us to focus our expertise on the highest-risk activity. The road to implementing such a system is complex, involving significant legal, regulatory, and practical hurdles. But Hertha has given us a clear, data-backed glimpse of what the future of financial crime-fighting could look like. The question for us now is: are we ready to build it?