How Technological and Cultural Trends are Reshaping Trust in Financial Institutions
Our concepts of trust and stability have been deeply challenged in recent years. After exiting a global pandemic that upended our trust in government decision-makers and highlighted the fragility of accepted norms, we plunged headfirst into a combination of macroeconomic crises, with decades-high inflation, and geopolitical unrest in multiple regions.
In finance – the domain many of us associate most closely with stability and security for ourselves and our loved ones, whether we like it or not – we struggle to decide which institutions to trust. A recent Gallup study showed that U.S. consumers trust very few institutions: the average percentage of U.S. adults with a great deal or quite a lot of confidence in nine major institutions fell to an all-time low of 26% in 2023.
Financial institutions and trust
When it comes to consumers’ trust in financial institutions, the situation is no better. Only 26% of consumers have “a great deal or quite a lot” of confidence in banks – lower than institutions such as the military, the police, or the Supreme Court.
This lack of trust is even more pronounced among younger generations. For example, when Gen Z consumers are asked whom they trust to provide banking services, the data shows that PayPal and Apple garner more trust than their current primary bank or credit union.
The recent news about a group of hackers claiming to have stolen the details of tens of millions of Santander bank accounts and offering them for sale for as little as a few cents each¹ best exemplifies why Gen Z feels that way.
While present trends are indeed challenging, in the coming years the concept of trust will face even deeper, unprecedented challenges due to several significant technological, regulatory, and even cultural trends reshaping financial markets:
Decision-making is moving from humans to algorithms: AI models are being widely integrated into numerous financial applications, including risk scoring, loan and policy underwriting, investment advice, and fraud prevention. This shift – handing critical decisions from humans over to algorithms – began only very recently.
The exponentially diminishing cost of fraud: The emergence of Large Language Models (LLMs) and diffusion models, capable of generating highly believable artificial data at low cost, poses a grave threat to trust as fraudulent activities utilizing such data become more prevalent and sophisticated. One email security vendor recorded a more than 12x increase in phishing emails between Q4 2022 and Q3 2023 alone, following the launch of ChatGPT.²
Standardization of financial data and payment access: The advent of open banking further complicates matters, as standardizing access to sensitive financial information and payment initiation introduces new vulnerabilities to cyberattacks and fraudulent schemes.
Increasing velocity and irreversibility of moving money: Real-time payment services such as RTP and FedNow are seeing growing adoption among financial institutions. While they provide a better and much faster service, these transactions are often irreversible, making it virtually impossible to recover stolen funds. This means that trust, once given, is very hard to reverse.
Increased connectivity among financial players: Finally, the interconnectivity facilitated by APIs across multiple networks creates a landscape where recovering fraudulently acquired funds becomes exceedingly challenging, undermining trust in the financial system’s security and integrity.
The vectors of trust
As the concept of trust evolves, we’ve developed a framework called the “vectors of trust” to map the main interactions between financial institutions (FIs) and different stakeholders in the context of trust. This framework enables us to organize existing solutions and identify areas where we believe solutions are still missing:
The 6 Vectors of Trust for Financial Institutions
Customer to FI
The first and most straightforward vector is how customers trust the FI, which is fundamental for maintaining client relationships and ensuring continued business. In the context of AI, customers now also need to trust FIs to use the technology in a way that is fair and free of the risk of hallucinations. For example, they need to trust that the credit-scoring technology used to approve and price loans is trained on an unbiased, PII-free dataset, or that the investment advice they read isn’t based on hallucinated information that doesn’t exist. We believe that FIs will have to deploy measures to ensure that the likelihood of such cases is minimized and their effect contained. Still, there must also be customer-facing tools (think “Credit Karma for AI”) that will help educate consumers about the AI-enabled financial services they receive. We believe this use case will be dominated by established players that already have access to vast amounts of consumer data.
With the rise of RTP, customers will also need to trust that FIs are moving money to and from the correct accounts. Since transactions on these new rails are irreversible, FIs need to verify the accounts they are moving money between correctly and quickly.
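To make this concrete, here is a minimal sketch of what a pre-send verification gate might look like. The `lookup_account_holder` call stands in for a hypothetical account-verification provider, and the name-matching threshold is illustrative, not a recommendation:

```python
# Minimal sketch of a pre-send verification gate for irreversible
# real-time payments. All names and thresholds are hypothetical.
from difflib import SequenceMatcher

def lookup_account_holder(routing_number: str, account_number: str) -> str:
    """Placeholder for a call to a hypothetical account-verification provider."""
    return "JANE Q DOE"  # stubbed response for the sketch

def name_similarity(a: str, b: str) -> float:
    """Fuzzy similarity between account-holder names, in [0, 1]."""
    return SequenceMatcher(None, a.upper().strip(), b.upper().strip()).ratio()

def release_payment(routing: str, account: str, intended_payee: str,
                    threshold: float = 0.85) -> bool:
    """Release an irreversible real-time transfer only if the registered
    holder of the destination account plausibly matches the intended payee."""
    registered = lookup_account_holder(routing, account)
    if name_similarity(registered, intended_payee) < threshold:
        return False  # hold for manual review instead of sending
    return True

print(release_payment("021000021", "123456789", "Jane Q. Doe"))  # True
```

The key design point is that the check happens before funds move, because on irreversible rails there is no second chance after the transfer settles.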
Example use cases: loan pricing, account verification, financial education and literacy
Representative companies: Orum, TunicPay, TrustMi, Verituity, Array, Meniga, ClearScore
FI to Customer
Only slightly less obvious is the trust that an FI places in its customers: when a customer opens an account, requests a loan, or directs the FI to perform a task, the FI has a set of procedures and technologies to ensure the information is genuine and that a third party hasn’t taken over the account for fraud or money laundering. With the increasingly easy and low-cost generation of fake but highly credible identities – including ID cards, selfies, and “deep-fake” video and audio – FIs face new challenges in protecting themselves from these tech-enabled illicit activities. Granted, some of these challenges should be solved by traditional KYC, AML, and anti-fraud tools, but new threats will require new solutions. Clarity, which develops technology to detect videos manipulated by GenAI tools, stands out as a prime example of the new tools FIs must embrace. We are excited about this area and believe there will be numerous opportunities for companies to shine.
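As a rough illustration of how such signals might be combined at onboarding, consider the sketch below. The signal names, score ranges, and thresholds are all hypothetical; in practice, each score would come from a dedicated vendor model (document verification, liveness detection, deepfake detection):

```python
# Illustrative sketch of an onboarding decision gate combining several
# trust signals. Signals and thresholds are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class OnboardingSignals:
    doc_match_score: float  # ID document vs. stated identity, in [0, 1]
    liveness_score: float   # selfie/video liveness check, in [0, 1]
    deepfake_score: float   # probability media was GenAI-manipulated, in [0, 1]

def onboarding_decision(s: OnboardingSignals) -> str:
    """Return 'approve', 'review', or 'reject' for a new applicant."""
    if s.deepfake_score > 0.8 or s.liveness_score < 0.2:
        return "reject"  # strong signal of a synthetic identity
    if (s.doc_match_score > 0.9 and s.liveness_score > 0.8
            and s.deepfake_score < 0.2):
        return "approve"  # all signals clearly genuine
    return "review"  # ambiguous: route to a human analyst

print(onboarding_decision(OnboardingSignals(0.95, 0.9, 0.05)))  # approve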
Example use cases: know your customer (KYC) for onboarding and ongoing monitoring, detection and reporting of money laundering, account takeover prevention, crypto fraud, custody and compliance
Representative companies: Clarity, Sardine, Inscribe, Fireblocks, Ballerine, Footprint, Parcha, Alloy, Hawk
FI Internally
One of the challenges we believe FIs will face in the new AI era, especially given pressure on boards and top management to deploy AI as a strategic tool, will be using AI in a compliant yet scalable way. In other words, the FI needs to trust itself. This self-trust extends to how FIs manage risks associated with AI technologies, including data privacy and algorithmic bias, and to how FIs will use AI to self-regulate.
One of the key challenges to tackle is ensuring the performance, explainability, and fairness of an AI-based product, which can be a daunting task for a highly regulated FI. Concurrently, these products need to be deployed in a scalable manner that takes days, not months. As a result, we believe that AI model validation technologies will be widely used by FIs. Another challenge is for FIs to monitor their communication with customers and partners to detect market manipulation or abuse over channels that are constantly evolving (WhatsApp, Telegram, Signal, etc.) and that use new forms of data such as emojis, stickers, voice recordings, and videos. We predict that FIs will have little choice but to use sophisticated AI models to monitor these channels.
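As one small example of what model validation can look like in code, here is a sketch of a single fairness check – the “four-fifths” disparate-impact ratio – applied to a credit model’s approval decisions. The groups and decisions are made-up illustrative data; a real validation suite would run many such checks:

```python
# Sketch of one model-validation check: the "four-fifths"
# disparate-impact ratio on approval decisions. Data is illustrative.
from collections import defaultdict

def disparate_impact_ratio(groups, approvals):
    """Ratio of the lowest group approval rate to the highest.
    Values below ~0.8 are commonly treated as a fairness red flag."""
    totals, approved = defaultdict(int), defaultdict(int)
    for g, a in zip(groups, approvals):
        totals[g] += 1
        approved[g] += int(a)
    rates = {g: approved[g] / totals[g] for g in totals}
    return min(rates.values()) / max(rates.values()), rates

groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]
approvals = [ 1,   1,   1,   0,   1,   0,   0,   1 ]
ratio, rates = disparate_impact_ratio(groups, approvals)
print(rates, ratio)  # {'A': 0.75, 'B': 0.5} 0.667 -> flag for review
```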
Example use cases: model validation, data cleansing, communication surveillance
Representative companies: CitrusX, FairPlay, Fiddler, Yields, Shield FC, LeapXpert, Cable
FI to Vendors
While FIs have long been aware of the need to manage third-party risk (TPRM), recent developments in AI have significantly evolved this practice. FIs will increasingly consume training data and services that leverage highly sophisticated AI algorithms for predictive or generative purposes, and they will need to fully trust this data and these services. This includes trusting that the data is unbiased, harmless, and licensed for use, as well as additional aspects of the fairness and explainability of the AI algorithms. We foresee FIs using tools to enforce these high levels of trust among their third-party providers of data and algorithms.
Example use cases: Third-party risk management, data and algorithms management
Representative companies: Prevalent, Securiti, SkyFlow, Themis, Solidatus
FI to FI
FIs usually place little trust in each other. However, given how easy it has become to remotely open accounts at multiple FIs simultaneously, this state of mind must be re-evaluated: collaboration among FIs is one of the most viable solutions to the mounting difficulty of combatting fraud.
We believe that technologies enabling collaboration without compromising PII or other sensitive customer data may be one of the few viable solutions. Furthermore, an FI may invest more in a single identity-validation effort, but if done well, that work happens only once, and the resulting verified identity can be reused repeatedly, lowering overall cost. Although it may initially sound counter-intuitive from a pure business perspective (sharing the fruit of one FI’s work with another FI), we believe that such collaboration technologies, which lower the cost and increase the accuracy of identity checks, will be crucial to the future of trust among FIs.
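One simplified way to collaborate without exchanging raw PII is for each FI to tokenize identities with a keyed hash under a shared secret and compare only the tokens. The sketch below illustrates the idea; the key, fields, and records are hypothetical, and production systems would favor stronger constructions such as private set intersection or multi-party computation:

```python
# Simplified sketch of cross-FI identity matching without sharing raw
# PII: each FI derives HMAC tokens locally and compares only tokens.
import hmac, hashlib

SHARED_KEY = b"demo-key-agreed-out-of-band"  # hypothetical shared secret

def tokenize(name: str, dob: str, national_id: str) -> str:
    """Derive a comparable token from normalized PII fields."""
    msg = f"{name.upper().strip()}|{dob}|{national_id}".encode()
    return hmac.new(SHARED_KEY, msg, hashlib.sha256).hexdigest()

# Each FI computes tokens locally and shares only the token set.
fi_a_tokens = {tokenize("Jane Q. Doe", "1990-01-01", "123-45-6789")}
fi_b_tokens = {tokenize("JANE Q. DOE", "1990-01-01", "123-45-6789"),
               tokenize("John Smith",  "1985-06-15", "987-65-4321")}

# The overlap reveals shared, already-verified identities – no raw PII moves.
print(len(fi_a_tokens & fi_b_tokens))  # 1
```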
Example use cases: Shared data networks, distributed identity
Representative companies: Portabl, Indy Kite, Indicio, CapStack, Access Fintech
Regulators to FI
Lastly, regulators’ trust plays a pivotal role in how FIs use AI. FIs must demonstrate their responsible use of AI to regulatory bodies, ensuring compliance with laws and regulations while fostering trust in the integrity of their operations. As regulators are always playing “catch-up” with technological advances, we believe they will utilize independent tools to monitor FIs.
Example use cases: Regulatory reporting
Representative companies: AQMetrics, Regnology, Cube
The landscape of trust solutions
The advent of technologies such as AI, RTP, and open banking is driving a unique change in the concept of trust. FIs, consumers, businesses, and regulators realize that significant aspects of decisions entrusted to humans for millennia will irreversibly move to machines and algorithms. Many entrepreneurs have already identified this trend and the challenges it poses, and are moving full steam ahead in building the infrastructure and solutions to address them.
We mapped some of the most interesting companies that are active in this space and a selection of the use cases they are targeting. However, we readily acknowledge that this initial foray into mapping new vectors of trust and the startups operating within them will expand. This category is only in its infancy, and plenty of companies will take advantage of the opportunities it presents. If you are the founder of one of these companies – come talk to us. We can’t wait to hear your pitch!
−−−−−−−−
References:
¹https://www.finextra.com/newsarticle/44234/hackers-claim-to-have-bank-account-details-of-30m-santander-customers
²https://slashnext.com/state-of-phishing-2023/