As Kai-Fu Lee explains in his recent book, AI Superpowers: China, Silicon Valley and the New World Order, artificial intelligence (AI) and machine learning have moved from the age of discovery to the application and implementation stage.
Data scientists have made major leaps in the past decade to get us here. The complex algorithms have been written, the keys to manipulating massive data sets are known, and the technology is universal enough to be applied to different problems.
Meanwhile, despite banks’ heavy spending on compliance, the traditional approach does not appear to be working. Over the last decade, 90% of European banks have been fined for AML-related offences; globally, banks have been fined approximately $26 billion over that period.
Now it is up to financial institutions to take the leap and use AI and machine learning to detect human traffickers, narcotics and arms sales, terrorist payments, and the money laundering that fuels these activities.
Indeed, regulators are now encouraging financial institutions to experiment, and to use the power of AI and machine learning to detect suspicious activity. To implement these AI solutions successfully, subject matter experts must be integral to the process: translating the problem, identifying the challenges in solving it, and shaping the best solution.
The Opportunity to Apply AI to Help Detect Financial Crime is Here
Scientists have long sought to give machines human-like intelligence. Breakthroughs made over the past decade are being applied to real problems in business and have the potential, over time, to transform industries and the global economy.
One of the most compelling use cases for AI is the battle against financial crime. AI offers banks two primary benefits in this battle: it can increase the effectiveness and efficiency of financial crime investigations, and it can strengthen the institution’s risk management.
In addition to helping financial institutions avoid risk by complying more effectively with regulations, AI has the potential to slash the cost of compliance, mainly by reducing false positives in monitoring systems and redirecting human experts to other, more productive areas of suspicious-activity review.
Leaders in the banking industry have been cautious about trying new AI solutions. Until recently, financial services firms have understandably not taken full advantage of AI because of concerns over so-called “black box” models, i.e., models whose inner workings are not transparent to the end user. If a bank does not understand how its technology monitors for financial crime, it cannot explain to its regulators how it is complying with regulations.
We are now in the age of implementation, however, where subject matter experts work directly with data scientists to adapt machine learning technologies and to ensure that the solutions are effective, efficient, and justifiable.
The need to integrate AI is obvious from the recent, radical change in the regulatory environment. On December 3, 2018, five federal agencies in the U.S., including the Financial Crimes Enforcement Network, issued a joint statement encouraging banks to implement innovative approaches, including AI: “The Agencies welcome these types of innovative approaches to further efforts to protect the financial system against illicit financial activity.”
Regulators globally are also encouraging the use of AI. Indeed, the Monetary Authority of Singapore released a set of principles in November 2018 to promote fairness, ethics, accountability and transparency in the use of AI technologies in finance.
AI — the Basics
There are two main kinds of machine learning: supervised and unsupervised. Each has its own particular strengths. With supervised learning, a model is trained on data that has already been categorized, so it learns to identify potentially suspicious transactions from past examples.
With unsupervised learning, computer scientists expose the system to raw, uncategorized data. Through interactions with the data, the system identifies patterns that might signal money laundering, and it can also suggest new ways to organize and analyze data.
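The distinction can be sketched in a few lines of Python. Everything below is invented for illustration: a toy supervised classifier that learns an amount threshold from previously labeled transactions, and a bare-bones one-dimensional clustering that discovers groupings in unlabeled amounts without ever being told what “suspicious” means.

```python
# Toy illustration of supervised vs. unsupervised learning on
# hypothetical transaction amounts. All data and thresholds are invented.

def train_supervised(labeled):
    """Supervised: learn a simple amount threshold from labeled examples
    (label 1 = previously flagged as suspicious, 0 = cleared)."""
    flagged = [amount for amount, label in labeled if label == 1]
    cleared = [amount for amount, label in labeled if label == 0]
    # Midpoint between the largest cleared and smallest flagged amounts.
    return (max(cleared) + min(flagged)) / 2

def classify(amount, threshold):
    """Apply the learned threshold to a new transaction."""
    return 1 if amount >= threshold else 0

def group_unsupervised(amounts, iters=10):
    """Unsupervised: a bare-bones 1-D two-means clustering that discovers
    groupings in unlabeled amounts, with no notion of 'suspicious'."""
    centers = [min(amounts), max(amounts)]
    for _ in range(iters):
        clusters = [[], []]
        for a in amounts:
            nearest = min((0, 1), key=lambda i: abs(a - centers[i]))
            clusters[nearest].append(a)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

labeled = [(120, 0), (300, 0), (9_500, 1), (12_000, 1)]
threshold = train_supervised(labeled)      # midpoint of 300 and 9,500
print(classify(10_000, threshold))         # prints 1 (flagged)
print(group_unsupervised([100, 150, 9_800, 11_000]))
```

Real systems use far richer features than a single amount, but the division of labor is the same: supervised models generalize from human-labeled outcomes, while unsupervised models surface structure that no one has labeled.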
Machines and People — a True Partnership
It is important to realize that models using AI, no matter how intelligent, cannot be expected to operate without human oversight and testing. Even in the case of unsupervised learning, humans with subject matter expertise must design and optimize the models.
The importance of this is demonstrated by the use of supervised learning in sanctions screening. Every bank transaction must be screened to see if the entities involved are on a list of known criminals or terrorists. Even the best screening systems produce a high rate of false positives that must be dispositioned by a human reviewer, who either clears the alert or escalates it for further review.
With supervised learning, humans train the model on previously dispositioned sanctions alerts so that it can deal with new ones. Subject matter experts then test the model on fresh alerts to see how it performs. Based on those findings, they optimize the model until it can perform the first level of review faster and more accurately than its human counterparts. Because the subject matter experts are involved at every step, they can explain and justify the technology to regulators.
Humans continue to be an integral part of the process after a model is put into operation. The AI performs the function of a Level 1 reviewer, while a human checks its decisions. Once the model has proven that it disposes of alerts accurately, it can review all Level 1 alerts on its own, with only a sample tested by a human. This achieves maximum effectiveness (accurate alert dispositioning) and efficiency (banks can redeploy their human sanctions experts to other, more complex areas).
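The staged workflow just described can be sketched as follows. This is not any vendor’s implementation: the score field, thresholds, and audit rate are all hypothetical, and a real model would score alerts from many features rather than a single similarity value.

```python
# Sketch of a two-stage sanctions-alert review: the model auto-clears
# low-risk alerts, humans review the rest, and a random sample of the
# model's clearances is audited for quality control. All names and
# thresholds here are hypothetical.
import random

def model_score(alert):
    """Stand-in for a trained model: probability that the alert is a
    true match. Here it just reads a hypothetical similarity feature."""
    return alert["name_similarity"]

def triage(alerts, clear_below=0.2, audit_rate=0.1, seed=0):
    """Split alerts into model-cleared and human-escalated piles, plus a
    human audit sample drawn from the model's clearances."""
    rng = random.Random(seed)
    cleared, escalated = [], []
    for alert in alerts:
        if model_score(alert) < clear_below:
            cleared.append(alert)       # model disposes of the alert
        else:
            escalated.append(alert)     # human Level 1 review
    audit_sample = [a for a in cleared if rng.random() < audit_rate]
    return cleared, escalated, audit_sample

alerts = [{"name_similarity": s} for s in (0.05, 0.10, 0.55, 0.92)]
cleared, escalated, audit_sample = triage(alerts)
print(len(cleared), len(escalated))     # prints 2 2
```

The audit sample is the key design choice: it is what lets the bank keep demonstrating to regulators that the model’s clearances remain accurate after deployment.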
We see this as a true partnership. Machines and humans must collaborate to accomplish things that neither could do as well on their own.
Already, AI systems are capable of performing link analysis, drawing inferences by identifying entities that are parties to suspect transactions. AI systems are also gathering and analyzing data from public sources, including from social networking sites, to help establish risk ratings for particular customers.
AI systems can also spot novel activities of terrorists and criminals. Indeed, criminals are constantly developing new methods of hiding their activity. AI can be used to identify new behaviors that would alert the financial institution to investigate.
In addition to supervised learning, unsupervised learning will be particularly useful in helping banks distinguish between typical banking behavior and potentially suspicious activity.
One of the most promising AI techniques to do so is called intelligent segmentation. Navigant recently completed a project using intelligent segmentation software from Ayasdi for a global bank.
Regulators had ordered the bank to review about 20 million transactions going back several years. Given the sheer volume, the regulator gave the bank the option of reviewing them using innovative methods.
With traditional monitoring systems, banks typically segment their customers by industry, type of business, size, and other factors, then apply rules that have worked historically for businesses in those segments. The problem with this approach is that such segments do not reliably group entities that actually transact in similar ways.
In this project, the AI system dealt with transactions without regard for traditional categories. Instead, it analyzed transactions, observed patterns, and created new, more relevant segments, placing customers in them based on their behavior. A segment, for instance, might include entities that engage in large wire credit transactions, deal with high-frequency counterparties, and have a large number of unique originators.
If a customer executed transactions that were outside of the normal parameters for their segment, they would be subject to further analysis, including, potentially, investigations by humans.
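That final check can be sketched as follows (this is not Ayasdi’s actual method, which the source does not detail): once behavioral segments exist, each segment’s transaction statistics define its “normal parameters”, and transactions far outside them are routed to investigators. The segment labels, figures, and z-score rule below are all invented for illustration.

```python
# Flag transactions that fall outside the normal parameters of their
# behavioral segment, using mean +/- z standard deviations as a
# (hypothetical) definition of "normal". Segments are taken as given
# here; in the project described above they were discovered from
# behavior rather than assigned by industry or size.
import statistics

def segment_parameters(transactions):
    """Per-segment mean and standard deviation of transaction amounts."""
    by_segment = {}
    for t in transactions:
        by_segment.setdefault(t["segment"], []).append(t["amount"])
    return {seg: (statistics.mean(a), statistics.pstdev(a))
            for seg, a in by_segment.items()}

def flag_outliers(transactions, params, z=3.0):
    """Transactions more than z standard deviations from their segment's
    mean amount; these would go to human investigators."""
    flagged = []
    for t in transactions:
        mean, std = params[t["segment"]]
        if std > 0 and abs(t["amount"] - mean) > z * std:
            flagged.append(t)
    return flagged

txns = ([{"segment": "large-wire", "amount": 100} for _ in range(20)]
        + [{"segment": "large-wire", "amount": 10_000}])
params = segment_parameters(txns)
print(len(flag_outliers(txns, params)))   # prints 1
```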
The project’s results satisfied the bank and its regulator. The number of alerts decreased significantly, and the alerts that remained were markedly more productive than those generated by traditional analysis.
Experiment and Innovate
Financial services firms have been given permission to experiment, and we recommend that they do so. AI technologies, including machine learning, are mature enough to be applied today to some of the most pressing needs in banking.
Banks do not have to rip out and replace existing computer monitoring systems because the new technologies complement and enhance their legacy systems. At the same time, banks do not have to go to the trouble and expense of building massive teams of computer scientists specializing in AI. Powerful new machine learning technologies are available today. We urge banks to use subject matter experts to identify a pain point, apply AI technology, reap the rewards, then move on to the next problem.
There is a downside to being too cautious. Regulators today have given financial institutions permission and encouragement to experiment, as long as it is done responsibly. In the not-too-distant future, AI technologies will be considered best practice.
This article was part of the World Economic Forum Annual Meeting and was originally posted on the World Economic Forum website on Jan. 17, 2019.