By Chris Perry, Contributor
Let’s think about some framework for deciding when and where it’s appropriate to apply AI.
Although it pains me to say this as a longtime technology disciple and fintech booster, companies need to create policies about when not to use technology—specifically artificial intelligence.
I make my living selling technology solutions to financial companies. So it’s in my interest to convince professionals in wealth management, asset management, capital markets, and, indeed, all areas of financial services, to adopt new technology solutions whenever and wherever possible. This mentality has given rise to billions of dollars of value creation and enabled the democratization of finance.
However, as artificial intelligence proliferates across the business world and society, I find myself frequently encountering examples of situations where companies would have been better served spurning AI in favor of old-fashioned “HI,” or human intelligence.
Some of these examples are all too obvious and all too common. By now, we’ve probably all been forced to interact with an AI-powered chatbot when what we really wanted was to talk to a human client service rep who could quickly answer our question or fix our problem. Recently, I’ve also encountered communications from companies that were obviously (and poorly) written by AI. Those experiences have mainly been on social media, where apparently some companies are content to let their brands be defined by unedited, AI-generated text. My first thought is: Are you underestimating the intelligence of your clients?
These examples of annoying or excessive AI use might turn off some consumers to a specific company or brand. But misapplications of AI can also have much more serious consequences. Artificial intelligence is still very much a work in progress. AI models hallucinate, make mistakes, and can be influenced by dangerous biases introduced through training data, algorithmic formulations, and human designers. Given these shortcomings, an AI-generated mishap could be catastrophic for financial services companies that hold consumers’ wealth, must earn their trust every day, and operate under tight regulations.
TALK: A Decision-Making Framework for AI
So despite my enthusiasm for technology in general—and my strong recommendations in past columns for individuals to start experimenting with AI as a time-saver and personal assistant—I think it’s important to pause for a moment. Let’s think about a framework for deciding when and where it’s appropriate for companies to apply AI, and when it might be better to stick with human intelligence and more traditional methodologies.
This framework should be applicable to both external-facing functions that directly impact customers and internal applications involving a company’s operational systems.
Of course, the first element of any business decision-making framework is a cost-benefit analysis. In a growing number of cases, companies will find that AI solutions are simply more economically attractive than human alternatives, at least in the long run. Next, companies must look at the AI solution from a risk-management perspective. Are there certain core functions in the business that are simply too important (or too susceptible to error or compliance issues) to fully entrust to AI?
Finally, companies must institutionalize some guardrails that prevent cost-savings incentives from driving these decisions entirely, ensuring that senior management takes non-financial criteria into account. Companies need some system that requires decision-makers to gather information and opinions from a broad range of employees about the wisdom and implications of using AI in specific applications.
This system can be laid out with the acronym “TALK”:
T — Think about all the ways switching to an AI solution will affect workflows, customers, culture and the brand.
A — Ask the business team most directly involved if they can think of any detrimental effects the switch to AI could have in those same categories (workflows, customers, culture and brand).
L — Leverage colleagues from across the organization, including business teams, technology experts, senior management, sales and client service, and other areas to gather opinions about applying the AI solution to this particular business issue.
K — Kick AI to the curb when the team’s HI is better than AI. Your clients deserve the respect of human intelligence.
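For readers who like to see a process made concrete, the four steps above could be captured as a simple checklist. The sketch below is a hypothetical illustration only: the class name, fields, and decision logic are my own assumptions, not part of any formal TALK implementation.

```python
# Hypothetical sketch of the TALK framework as a review checklist.
# Names and decision logic are illustrative assumptions, not a
# prescribed implementation.
from dataclasses import dataclass, field


@dataclass
class TalkReview:
    """Collects input for one proposed AI application."""
    proposal: str
    # T: effects on workflows, customers, culture, and brand
    think_effects: list = field(default_factory=list)
    # A: detrimental effects flagged by the business team
    asked_concerns: list = field(default_factory=list)
    # L: opinions gathered from colleagues across the organization
    leveraged_opinions: list = field(default_factory=list)
    # K: does the team judge that its HI beats AI here?
    hi_beats_ai: bool = False

    def decide(self) -> str:
        # Kick AI to the curb when HI is better than AI.
        if self.hi_beats_ai:
            return "stick with human intelligence"
        if self.asked_concerns:
            return "proceed with AI, with guardrails for flagged concerns"
        return "proceed with AI"


review = TalkReview(
    proposal="AI chatbot for client service",
    think_effects=["faster first response", "less personal touch"],
    asked_concerns=["frustrated clients on complex issues"],
    hi_beats_ai=True,
)
print(review.decide())  # -> stick with human intelligence
```

The point of writing it down this way is that the "K" step is an explicit, recorded decision rather than a default: someone has to consciously answer whether human intelligence wins before the AI solution goes ahead.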
Humans Still Bring Something Important
That last element is most important. At a time when AI is tackling problems, cutting costs and unlocking new opportunities, it takes a bold decision-maker to assess a situation and determine that, no, in this instance, it makes sense to hit pause on the AI solution and stick with our human intelligence.
In that case, having results from a formal, institutionalized decision-making process will allow business leaders and other decision-makers to more easily justify their determination to say no to AI. That ammunition will be particularly valuable when there is a significant cost savings associated with the AI solution.
As a technology salesperson, I hope companies don’t decide to spurn tech-based solutions too often. But sometimes it’s worth remembering that we humans do bring something important to the table, too.