Imagine a scenario where banking “bots go wild” in the financial sector — chatbots contacting the wrong customers, false positives among flagged transactions wasting time and money to resolve, or systematic biases that exclude creditworthy borrowers from the lending pool. What if, instead of building customer trust and loyalty, a bank’s artificial intelligence (AI) and machine learning (ML) program destroys it? What if an organization is making strategic decisions or setting policy based on erroneous data?
While AI/ML can create unquestionable value for an organization, it can just as easily destroy it. The difference between value creation and value destruction may lie in effective governance.
The proliferation of AI/ML tools in financial services is undeniable, with companies using these innovations to cut costs, gain deeper insights, and tap the wealth of data at their fingertips. Banks are beginning to realize the benefits, capitalizing on more efficient and accurate model reporting and on the ability to systematically detect data patterns and relationships that might unlock growth opportunities or reveal unknown risks. AI/ML is no longer a pipe dream: roughly 80% of banks with more than $150 million in assets have evaluated the use of ML, and many have already deployed tools and features built around the technology.
What should banks consider when embarking on an AI/ML journey to enhance their model risk management systems?
Special thanks to Brian Karp, who contributed to this article.