Asset managers are already implementing artificial intelligence to enhance their compliance operations, and one firm touting the use cases of AI in the money management industry is financial services giant Fidelity Investments.
Fidelity’s application, called Saifr, uses natural language processing to help portfolio managers, sales and other staff review communications that will be shared or presented to investor clients or the larger public.
Last year, Fidelity Labs introduced Saifr as a means of identifying “potential brand, reputational, and regulatory risks” in such materials, according to a white paper from the firm.
Saifr: ‘Grammar Check for Compliance’
The technology can be used in real time while creating communication materials, such as emails, presentations or speeches, or to scan materials afterward as another compliance layer, in conjunction with standard review by the firm’s compliance department, Vall Herard, the CEO and co-founder of Saifr, a Fidelity Labs company, says in a phone interview.
“Broadly speaking, compliance processes tend to be fairly manual and take a lot of time and, quite honestly, tend to be fairly inefficient,” Herard says. “That includes everything from emails that could potentially contain insider information,” to marketing materials shared with investor clients or even the text of a speech, he explains.
“All of those things are subject to regulatory rules in that you cannot be providing misleading information that could potentially be harmful to investors,” Herard says. “Think of it as a grammar check for compliance. Above and beyond that, we can also detect instances of disclosures that may be required.”
Saifr not only highlights potential compliance violations, such as investment risks that need to be disclosed, but can also suggest alternative wording.
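The flag-and-suggest behavior described above can be illustrated with a toy rule-based check. This is a hypothetical sketch, not Saifr's actual approach: Saifr uses proprietary natural language processing models, whereas the phrase list, substitutions and disclosure rule below are invented for illustration.

```python
import re

# Hypothetical risky phrases mapped to safer alternative wording.
# A real system would use trained NLP models, not a hand-written list.
RISKY_PHRASES = {
    r"\bguaranteed returns?\b": "potential returns (not guaranteed)",
    r"\brisk[- ]free\b": "lower-risk",
    r"\bwill outperform\b": "seeks to outperform",
}

def review(text):
    """Return (issues, suggestion): issues lists flagged problems,
    suggestion is the text with safer wording substituted."""
    issues = []
    suggestion = text
    for pattern, replacement in RISKY_PHRASES.items():
        if re.search(pattern, suggestion, re.IGNORECASE):
            issues.append(pattern)
            suggestion = re.sub(pattern, replacement, suggestion,
                                flags=re.IGNORECASE)
    # Separate check: past-performance claims may need a disclosure.
    if re.search(r"\bpast performance\b", text, re.IGNORECASE) and \
            "not indicative of future results" not in text.lower():
        issues.append("missing past-performance disclosure")
    return issues, suggestion
```

For example, `review("This fund offers guaranteed returns.")` would flag the phrase and suggest the hedged alternative, leaving a human compliance reviewer to make the final call.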
In addition to the preparation of sales decks, presentations and emails, Fidelity can also use the software to check for potential concerns when creating materials for new funds it is launching, Herard adds.
“The idea is that someone who is not a compliance officer who is creating that content, by the time they send it over to the compliance team, it’s closer to being compliant,” Herard explains. “So you can turn compliant content around much, much faster.”
The Saifr technology originated from Fidelity Labs, the company’s incubator.
Ready to Supplement, Not Replace
Dan Sondhelm, CEO of Sondhelm Partners, which helps institutional asset managers, wealth managers and other financial services firms with marketing, public relations and sales strategies, emphasizes how important it is for AI to supplement, rather than replace, the investment acumen of staff.
“There are a lot of ways companies are using AI,” Sondhelm says of asset managers. “One very common use is creating reports and shareholder communications.”
For instance, staff can create articles on stocks held within their investment portfolio, and artificial intelligence can make or recommend adjustments to the report, as needed. “Depending on the amount of adjustments, [firms will] say it was written by ChatGPT or, depending on the level of edits, [they] may say nothing,” Sondhelm says.
Firms also use AI, or predictive analytics, to understand which clients may terminate them or to identify if a prospective client is likely to buy from the firm in the future, based on engagement history, according to Sondhelm.
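The predictive-analytics use Sondhelm describes can be sketched as a simple propensity score over engagement history. Everything here is illustrative: the feature names, weights and threshold are invented for the example, whereas a firm would train such a model on its own historical client data.

```python
import math

# Hypothetical hand-set weights over invented engagement features.
# Positive weights raise the score; inactivity lowers it.
WEIGHTS = {
    "emails_opened": 0.4,
    "meetings_attended": 0.8,
    "months_since_last_contact": -0.6,
}
BIAS = -1.0

def buy_propensity(client):
    """Logistic score in (0, 1): higher means the client looks more
    likely to buy (or less likely to terminate the relationship)."""
    z = BIAS + sum(WEIGHTS[k] * client.get(k, 0) for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))
```

An engaged client (frequent contact, recent meetings) scores near 1, while a long-dormant client scores near 0, which is the signal firms use to prioritize outreach or flag at-risk relationships.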
At the other end of the spectrum, Sondhelm says compliance departments are “very concerned that AI is going to be used inappropriately.” Sondhelm explains that he once received a shareholder report via email from an asset manager, essentially highlighting a specific stock, and it was clearly written by AI.
“As I started reading it, it sounded very robotic,” Sondhelm says. “I looked at the bottom of the email, and it said, ‘written by ChatGPT.’ It was an email that was a stock spotlight, without their perspective at all. It was literally like reading Wikipedia. I thought, ‘Why would an asset manager send this out?’”
Fidelity’s use of Saifr, Sondhelm says, shows the benefits of what AI can do with human safeguards in place.
“They didn’t fire their compliance department. It’s a tool,” Sondhelm says. “It’s less work and a second set of eyes.”
Use Carefully, for Now
For Andrew Brzezinski, head of data strategy, business intelligence and analytics at Fidelity Investments, “any new technology comes with risks.”
“One specific risk with generative AI today: You can’t always trust the results,” Brzezinski says via email. “This technology is moving fast and is poised to benefit pretty much every industry. But the difference between what even the most advanced apps can do today and human intelligence is still vast. These tools don’t actually understand what they’re saying. In the case of text-based generative AI, these tools are only making statistical predictions about sequences of words, based on large volumes of text that were used to train them. That’s important for anyone using these tools to know.”
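Brzezinski’s point that these tools make “statistical predictions about sequences of words” can be shown at miniature scale with a bigram model, which picks the word that most often followed the previous one in its training text. Real large language models are vastly more sophisticated, but the underlying idea, prediction from word statistics rather than understanding, is the same. The tiny corpus below is invented for illustration.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count, for each word, which words follow it and how often."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the statistically most likely next word, or None if the
    word was never seen followed by anything in training."""
    followers = counts.get(word.lower())
    return followers.most_common(1)[0][0] if followers else None
```

Note that the model has no idea what a “fund” is; it simply reproduces the most frequent continuation it saw, which is exactly why outputs can be fluent yet untrustworthy.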
In July, the SEC proposed new rules related to potential conflicts of interest caused by the use of AI technologies as they interact with investors. The rules would require broker-dealers and investment advisers to take certain steps to “eliminate or neutralize the effect of any such conflicts,” according to the regulator’s announcement.
Companies will identify and develop additional uses for AI, but Brzezinski does not expect it to happen overnight.
“There’s potential for AI to streamline back-office operations and compliance procedures, but it hasn’t truly manifested yet,” Brzezinski says. “That’s largely due to some of the regulatory constraints that come with compliance, record maintenance and related procedures. Right now, we need to think about AI as nascent technology. So while it’s important for firms and portfolio managers to try this technology, it’s also prudent to choose use cases carefully.”
Brzezinski sees prudence in employing AI in situations such as internal-facing communication or low-level data analysis.
“Consider use cases that help with existing business processes, that play to the strengths of current AI capabilities (like drafting, summarization, translation, ideation) and that minimize downside risk,” he says. “For example, consider applying AI to less sensitive data, or to internal-facing use cases.”