Collusion Defeats Controls & Intelligence Defeats Collusion
Why Data, AI, and Human Skill Must Work Together to Fight the Fraud That Lives Inside Your Institution
A fraud case that recently rocked the Indian banking sector tells a story that should unsettle every risk professional alive. At a single branch, certain employees working in collusion with external parties allegedly forged physical cheques and processed unauthorized transactions over an extended period, siphoning hundreds of crores from government-linked accounts.
The fraud only surfaced when a government department tried to close its account, and the numbers simply didn’t add up.
By then, it was too late.
Suspensions
Forensic audits
Regulatory scrutiny
A stock in freefall
The institution’s own leadership acknowledged it plainly:
The entire maker-checker-authorizer system was in place. And yet, a group of people came together and made the fraud happen anyway.
That admission should stop every risk professional, compliance officer, and banking executive cold.
Not A Technology Failure.
It is a control failure engineered by humans who understood the controls all too well.
Collusive fraud is the most dangerous species of financial crime precisely because it defeats conventional safeguards from the inside. When the maker, checker, and authorizer are all compromised, or when even one trusted insider manipulates the chain, rule-based systems, periodic audits, and manual reconciliation become theatre. The fraudsters don’t break the system. They operate it.
This is not unique to one institution or one branch. Insider threat and internal collusion are consistently among the top drivers of fraud losses globally, across banking, insurance, supply chain, and public finance. The Association of Certified Fraud Examiners estimates that organizations lose 5% of annual revenues to fraud, with a significant share involving internal actors.
The uncomfortable truth: most of these frauds leave data trails. They just aren’t being read in real time.
Present or Future?
Data and AI are not the future of fraud risk management. They are the present, and most institutions are still catching up.
Here is what modern AI-powered fraud and risk systems can do that legacy controls simply cannot, and how they must be deployed with intent, not just installed with hope.
1. Behavioral Anomaly Detection: Move Beyond Rules, Into Patterns
Rule engines are reactive. They catch what you already know to look for. AI learns what normal looks like across every user, account, branch, and transaction relationship, and raises an alert the moment something diverges, even if no rule has been broken. An employee processing large-value instructions at unusual hours. A branch where reconciliation exceptions are always cleared by the same two people. A cluster of approvals that moves just below every threshold. These signals are invisible to policy manuals. They are loud to a well-trained model.
The imperative: deploy behavioral AI not as a monitoring tool sitting in a back office, but as a live risk layer embedded into every approval workflow.
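As an illustration, the per-entity baseline idea can be sketched in a few lines. Everything below is hypothetical and simplified: the employee IDs, the two features (hour of day, amount), and the z-score threshold; a production system would learn far richer profiles across many more dimensions.

```python
from statistics import mean, stdev

# Hypothetical transaction history: (employee_id, hour_of_day, amount).
history = [
    ("emp_17", 11, 52_000), ("emp_17", 10, 48_000), ("emp_17", 12, 50_000),
    ("emp_17", 11, 51_000), ("emp_17", 10, 49_000),
]

def baseline(events, emp):
    """Learn what 'normal' looks like for one employee: mean/stdev per feature."""
    rows = [(h, a) for e, h, a in events if e == emp]
    hours, amounts = zip(*rows)
    return {"hour": (mean(hours), stdev(hours)),
            "amount": (mean(amounts), stdev(amounts))}

def anomaly_score(profile, hour, amount):
    """Max absolute z-score across features; high = far from this person's norm."""
    zs = []
    for feat, value in (("hour", hour), ("amount", amount)):
        mu, sigma = profile[feat]
        zs.append(abs(value - mu) / sigma if sigma else 0.0)
    return max(zs)

profile = baseline(history, "emp_17")
# A large-value instruction at 02:00 breaks no rule, but screams against the baseline.
print(anomaly_score(profile, hour=2, amount=950_000))  # far above a z = 3 threshold
```

Note that no rule was violated and no threshold crossed; the alert exists only because the behavior diverged from this employee's own learned pattern.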
2. Graph Intelligence to Surface Collusion: See the Network, Not Just the Transaction
Collusion is, at its core, a network problem. And networks have signatures. Graph-based AI can map every relationship between employees, accounts, beneficiaries, approvers, and external entities, and detect when those relationships form patterns associated with coordinated fraud: shared access clusters, unusual co-authorization chains, money flows that loop back through the same nodes. What a human auditor sampling 5% of transactions cannot see, a graph neural network analyzing 100% of the transaction universe sees in real time.
The imperative: invest in graph-based analytics infrastructure and connect it to your identity, access, and payments data. Fraud hides in relationships. So must your detection capability.
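A toy version of the co-authorization signal, built on a hypothetical approval log: flag maker-checker pairs where one checker concentrates nearly all of a maker's approvals. Real graph analytics goes far beyond pair counts, but the underlying idea, fraud hiding in relationships rather than in any single transaction, is the same.

```python
from collections import Counter

# Hypothetical approval log: (maker, checker) per high-value transaction.
approvals = [
    ("emp_04", "emp_09"), ("emp_04", "emp_09"), ("emp_04", "emp_09"),
    ("emp_04", "emp_09"), ("emp_11", "emp_02"), ("emp_11", "emp_07"),
    ("emp_11", "emp_09"), ("emp_03", "emp_02"),
]

def suspicious_pairs(log, min_share=0.8, min_count=3):
    """Flag maker-checker pairs where one checker handles nearly all of a
    maker's approvals: a shared-access cluster worth a look, not proof of fraud."""
    by_maker = Counter(m for m, _ in log)
    by_pair = Counter(log)
    flags = []
    for (maker, checker), n in by_pair.items():
        share = n / by_maker[maker]
        if n >= min_count and share >= min_share:
            flags.append((maker, checker, share))
    return flags

print(suspicious_pairs(approvals))  # emp_04/emp_09: 100% concentration
```

Every approval in this log is individually legitimate-looking; the signal only exists at the level of the relationship between two nodes.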
3. Continuous Surveillance: End the Audit Window, Permanently
The gap between audits is where fraud lives. Monthly, quarterly, and annual reviews are retrospective by design, and fraudsters with institutional knowledge know exactly how to operate within that window. AI enables continuous, always-on monitoring of every transaction, every access log, every exception, every override, with risk scores updated in real time and anomalies escalated immediately.
The imperative: retire the mindset that surveillance is something you do periodically. Build infrastructure where the system watches always, so your people can investigate always.
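A sketch of the always-on idea, with hypothetical event weights, half-life, and alert threshold: each event updates an entity's decaying risk score, and escalation happens at observation time, not at the next audit.

```python
class LiveRiskScorer:
    """Always-on scoring: each event nudges an entity's risk score, which
    decays over time so stale signals fade. Thresholds escalate instantly."""

    def __init__(self, half_life_s=3600.0, alert_at=5.0):
        self.half_life_s = half_life_s
        self.alert_at = alert_at
        self.state = {}  # entity -> (score, last_event_timestamp)

    def observe(self, entity, weight, ts):
        score, last = self.state.get(entity, (0.0, ts))
        score = score * 0.5 ** ((ts - last) / self.half_life_s) + weight
        self.state[entity] = (score, ts)
        return score >= self.alert_at  # True = escalate now, not next quarter

scorer = LiveRiskScorer()
# Hypothetical weights: an override event carries weight 2.0.
assert not scorer.observe("branch_12", 2.0, ts=0)
assert not scorer.observe("branch_12", 2.0, ts=60)
print(scorer.observe("branch_12", 2.0, ts=120))  # a burst of overrides escalates
```

One override is routine; three in two minutes is a pattern, and the decay means the same three overrides spread across a quarter would never trip the threshold.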
4. Unsupervised Learning for Fraud You Haven’t Seen Before
The fraud method in this case, forged physical cheques, was acknowledged by the institution’s own leadership as the oldest fraud in banking. Yet it went undetected for a prolonged period. Supervised models only catch what they have been trained on. Unsupervised models surface statistical outliers regardless of fraud typology, because they don’t need to know what the fraud is, only that something is statistically out of place.
The imperative: complement your supervised fraud models with unsupervised anomaly detection running in parallel. Unknown fraud patterns are only unknown until someone looks at the right signal.
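A minimal unsupervised detector needs no fraud labels at all. This sketch uses the modified z-score (median and median absolute deviation), a standard robust-outlier technique, on hypothetical daily clearing totals; it flags the anomalous day without knowing what a forged cheque is.

```python
from statistics import median

def robust_outliers(values, threshold=3.5):
    """Unsupervised outlier detection via the modified z-score (median + MAD).
    No fraud labels needed; it only knows 'statistically out of place'."""
    med = median(values)
    mad = median(abs(v - med) for v in values)
    if mad == 0:
        return []
    return [i for i, v in enumerate(values)
            if abs(0.6745 * (v - med) / mad) > threshold]

# Hypothetical daily cheque-clearing totals for one branch (in lakhs).
totals = [41, 39, 44, 40, 42, 38, 43, 41, 310, 40]
print(robust_outliers(totals))  # flags index 8, the anomalous day
```

The detector was never told what fraud looks like; it surfaced the day because the data itself was out of place, which is exactly the property that catches typologies no one has seen before.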
Tech Is Just The Beginning
Technology alone is never enough. And this is the part of the conversation the industry consistently avoids.
Deploying AI in fraud risk is not like installing software. It demands a profound shift in how your people think, what they know, and how they engage with data every single day.
Risk and Compliance Teams Must Become Data-Literate, Not Optionally, But Urgently
The era of the fraud investigator who works purely from instinct, experience, and sampled transaction reports is ending. Today’s risk professional must be able to read a model output, interrogate an anomaly score, understand why an alert was raised, and make a judgment call that combines human context with machine signal. This is not a data science skill. It is a risk professional skill, and it needs to be built deliberately through structured learning, cross-functional exposure, and hands-on engagement with the tools.
Learn SQL. Understand what a confusion matrix tells you about your fraud model. Know the difference between a false positive rate and a precision score, and why both matter in your context. These are not optional anymore.
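Both numbers fall straight out of the confusion matrix, and the arithmetic is simple enough to do by hand. The counts below are invented for illustration.

```python
# Hypothetical alert outcomes from one month of model output
# (confusion-matrix cells: true/false positives and negatives).
tp, fp, fn, tn = 40, 60, 10, 9890

precision = tp / (tp + fp)            # of raised alerts, how many were real fraud
false_positive_rate = fp / (fp + tn)  # of genuine activity, how much got flagged

print(f"precision={precision:.2f}, FPR={false_positive_rate:.4f}")
```

Why both matter in your context: precision drives investigator workload (here, 60 of every 100 alerts are wasted effort), while the false positive rate drives friction for legitimate customers and colleagues, and a model can look excellent on one while being unusable on the other.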
Data Scientists and AI Engineers Must Learn the Domain, Deeply
A model built without genuine understanding of how banking operations work, how fraud typologies evolve, how internal controls are structured, and where the human vulnerabilities lie, will produce alerts that operations teams ignore within six months. The technical and domain communities must work in far closer proximity than most organizations currently allow. Embed your AI team in risk. Embed your risk team in AI development. Shadow each other. Build together. The boundary between these disciplines is where the best fraud detection capability is forged, and where most institutions are currently failing.
Leaders Must Create the Conditions for AI to Actually Work
AI systems will surface uncomfortable things about trusted people, long-standing processes, and high-performing branches or teams. The instinct to explain away an alert about a senior or well-regarded employee is powerful and deeply human. Leadership must actively and visibly build a culture where every alert is treated with the same rigor regardless of who it involves, where the model’s signal is never suppressed for political convenience, and where acting on AI-generated intelligence is rewarded, not quietly discouraged. Without this, even the most sophisticated fraud AI becomes very expensive decoration.
Skilling Up Is Not a One-Time Training Event. It Is a Continuous Operating Discipline.
Fraud evolves. Models drift. New typologies emerge. The institutions that win this fight are the ones that invest in continuous learning: regular model retraining, adversarial red-teaming, ongoing upskilling of risk analysts, and a culture of intellectual curiosity about what the data is trying to say. Attend the courses. Get the certifications. Run internal war-games where your risk team tries to defeat your own fraud models. Build learning loops, not just systems. Treat your people’s AI fluency as a risk asset that requires the same maintenance and investment as the technology itself.
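"Models drift" is itself something you can monitor continuously. One common check is the Population Stability Index, which compares the model's score distribution at training time against today's; the score bands and proportions below are hypothetical.

```python
import math

def psi(expected, actual):
    """Population Stability Index between two binned score distributions
    (given as proportions). A common rule of thumb: > 0.25 means
    significant drift, i.e. time to retrain."""
    return sum((a - e) * math.log(a / e)
               for e, a in zip(expected, actual) if e > 0 and a > 0)

# Hypothetical share of transactions per risk-score band.
at_training = [0.25, 0.35, 0.25, 0.15]
this_month  = [0.10, 0.25, 0.30, 0.35]
print(round(psi(at_training, this_month), 3))  # above 0.25: retrain
```

Wiring a check like this into a scheduled job turns "regular model retraining" from a calendar ritual into a data-driven trigger.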
Lastly…
The best thing AI has to offer is not efficiency. It is the ability to see what human systems are designed, or incentivized, to miss.
But it only delivers that capability when the humans working alongside it are skilled enough, empowered enough, and genuinely curious enough to act on what it reveals.
The question every institution should be asking today is not whether it has an AI fraud solution. The real questions are: Do our risk teams know how to work with it? Do our data teams understand the fraud domain well enough to build it right? And do we have a culture courageous enough to act on what it finds, even when that is uncomfortable?
If the answer to any of these is uncertain, that is the gap to close, before the next case, not after.
The tools exist.
The data exists.
The science exists.
What the moment demands is people skilled enough to wield these tools with precision, leaders bold enough to act on their output, and institutions brave enough to let the data speak, even when it says something no one wants to hear.
Insider fraud is not a new problem.
Our capacity to detect and prevent it has never been greater. The only question is whether we build the human capability to match it.


