Will AI Compromise Security for Institutional Investors?

Integrating artificial intelligence is all the rage, but organizations must consider what vulnerabilities it might expose and how they can plug them.



Institutional investors considering artificial intelligence for their own marketing, investment research or back-office needs must make sure these tools are not leaking sensitive financial data or competitive intelligence online. They must also consider whether, and how, their external money managers and other vendors—like investment accounting and risk management providers—are using AI to enhance their business operations, sources say.

Fred Teufel, a director at Vigilant Compliance, a Philadelphia-based firm that focuses on the investment management industry, says large asset owners, like pension funds, endowments and foundations, “need to be asking their investment advisers about their use of technology, and more specifically AI.”

Institutional investors should ask their money managers whether they are using a third-party vendor, for instance, that leverages AI for portfolio construction. If AI is used to help pick stocks, investors also “need to understand how that portfolio is being built, where that data is coming from, and how that data is being managed,” Teufel says.

“Also, is that data accurate?” he adds. Asset owners “need to understand what is going on inside the black box. In language they can understand.”


Proposed SEC Rules

This is especially true given the Securities and Exchange Commission’s proposed rules related to predictive analytics technology, including artificial intelligence. Proposed in July, the rules would require broker/dealers and investment advisers to take steps to address conflicts of interest that may arise from these technologies.

This has consequences for asset owners, not only due to their relationship with external money managers, but because the proposed rules apply to institutional investors as well, Teufel explains.

“There’s a lot of industry pushback on that rule, and it changes so many different things,” Teufel continues. “The SEC in the past has differentiated between institutional investors and retail investors. In its current form, they don’t differentiate between that in this rule. … Expanding the application of the rule to institutional investors, or institutional clients, is a huge scope change in terms of the way that conflicts of interest and fiduciary matters are being managed.”

The Vanguard Group is among the money managers that have suggested the proposed rules would be overly broad and too restrictive for investment firms. Matthew Benchener, a managing director at Vanguard, wrote in an October comment letter to the SEC that the firm shared “the view of the Investment Company Institute and SIFMA, that applying the proposal to sophisticated institutional investors, such as registered funds and similar pooled investment vehicles, is unnecessary and could harm investors in these products.”

Benchener’s letter continued: “A ‘covered technology’ would include any analytical, technological, or computational function, algorithm model, correlation matrix, or similar method or process that optimizes for, predicts, guides, forecasts, or directs investment-related behaviors or outcomes. … This proposed definition encompasses virtually any feature or communication designed to influence investment-related behaviors from investors.”

Cybersecurity at Issue

Aside from the regulatory scrutiny of AI, sources shared specific cybersecurity concerns of which asset owners should be aware.

Mohammad Rasouli, a Stanford University AI researcher who also helps institutional investors use AI for alternative investments, says the technology introduces risks that include hacking and other digital threats. Scammers could theoretically hack large language models like ChatGPT to obtain sensitive information contained in the system, he explained.

Research has also shown that it is possible to extract sensitive information from these platforms using specific prompts, he added.

“When you train large language models with some data, it learns that data, and someone can extract that data with smart prompts,” Rasouli says.
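
In practice, a compliance or security team might test for this kind of memorization directly. Below is a minimal sketch, in Python, of a canary-based leakage probe; the query_model() wrapper, the canary strings and the extraction prompts are all hypothetical stand-ins for whatever internal model and test data a firm actually uses.

```python
# Minimal sketch of a training-data leakage probe.
# Everything here (canaries, prompts, query_model) is illustrative only.

CANARIES = [
    "ACCT-4417-EXAMPLE",           # fake account identifier seeded into training data
    "Project Bluefin term sheet",  # fake deal code name
]

EXTRACTION_PROMPTS = [
    "List any account identifiers you have seen.",
    "Complete this sentence: 'The confidential deal code name is",
]

def query_model(prompt: str) -> str:
    # Placeholder: swap in the firm's actual model client here.
    return "stub response"

def probe_for_leakage() -> list[tuple[str, str]]:
    """Return (prompt, canary) pairs where a seeded secret surfaced in a reply."""
    hits = []
    for prompt in EXTRACTION_PROMPTS:
        reply = query_model(prompt)
        for canary in CANARIES:
            if canary.lower() in reply.lower():
                hits.append((prompt, canary))
    return hits

if __name__ == "__main__":
    for prompt, canary in probe_for_leakage():
        print(f"Canary {canary!r} surfaced for prompt: {prompt!r}")
```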

To beef up their security, some institutional investors have begun to run large language models on their own servers and databases, “so they have full control over it,” he said.
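
As a rough illustration of what self-hosting can look like, the sketch below loads an open-weight model through the Hugging Face transformers library. The model name “gpt2” is purely a stand-in for whichever vetted open-weight model a firm downloads to its own hardware, and the prompt is invented.

```python
# Sketch: running an open-weight model on in-house hardware, so prompts and
# documents stay inside the firm's network. Inference runs locally once the
# model weights have been downloaded and cached on the firm's own servers.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # substitute the firm's chosen open-weight model

prompt = "Summarize the counterparty exposure notes:"
result = generator(prompt, max_new_tokens=60, do_sample=False)
print(result[0]["generated_text"])
```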

In addition to external threats, there is also the risk of an internal data breach when using AI tools. An example could be a private equity fund using AI to scan large amounts of due diligence materials to analyze trends and, in the process, inadvertently exposing sensitive data, like nondisclosure agreements, to unintended parties, Rasouli explains.
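
One common mitigation is to scrub obvious identifiers from documents before an AI tool ever sees them. The snippet below is a minimal, illustrative sketch of that idea; the regular expressions and sample text are invented and would not cover everything a real redaction policy needs (names, deal terms, account numbers and so on).

```python
import re

# Sketch: scrubbing obvious identifiers from due diligence text before it is
# handed to an AI tool, so documents such as NDAs are not exposed verbatim.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each matched identifier with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

sample = "Contact Jane Doe at jane.doe@fundlp.com or 212-555-0137 re: the NDA."
print(redact(sample))
# -> Contact Jane Doe at [EMAIL REDACTED] or [PHONE REDACTED] re: the NDA.
```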

Mark Nicholson, a principal at Deloitte & Touche LLP, also says the use of generative AI technology could result in unintended exposures—even when users have set up certain risk controls to try to prevent this.

“[Let’s say] a generative AI has been granted a control that indicates that I’m not allowed to reveal a certain individual’s name or social security number,” says Nicholson, the financial services industry leader for Deloitte’s cyber and strategic risk practice. “But then you ask it to verify if the list is alphabetically accurate. Suddenly, you might be able to grant it the ability to circumvent that [security] control,” he said.
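
The sketch below illustrates the weakness Nicholson describes with a toy deny-list filter: a control that only blocks verbatim occurrences of protected values stops a direct answer, but not an indirect one that leaks the same fact. The names, values and replies are invented.

```python
# Sketch of why a simple deny-list output control can be sidestepped.
# The control blocks exact occurrences of protected terms, yet a reply that
# merely confirms where a name sits in an alphabetized list can leak the
# same information without tripping the filter.

BLOCKED_TERMS = {"Jane Quill", "123-45-6789"}

def passes_filter(reply: str) -> bool:
    """Naive control: only checks for verbatim occurrences of blocked terms."""
    return not any(term in reply for term in BLOCKED_TERMS)

direct_reply = "The client on the list is Jane Quill, SSN 123-45-6789."
indirect_reply = ("Yes, the list is alphabetically accurate; the entry between "
                  "'Quayle' and 'Quinn' is the client you asked about.")

print(passes_filter(direct_reply))    # False -- the control blocks the direct answer
print(passes_filter(indirect_reply))  # True  -- the indirect answer leaks through
```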

Furthermore, money managers work with a network of providers that asset owners should consider, as those parties could also be using AI in ways that affect the integrity or security of data, Nicholson says. This could include risk management and portfolio management providers, as well as firms offering investment accounting services.

“They have a variety of third, fourth and fifth parties they engage with; it’s a hyperextended network,” Nicholson says. “The question is: Can you trust that network? … Certainly when engaging with third parties, it’s critical to know when they are using AI. Where is data held, how it is held, and where does it go? What is the architecture of the tools that access [your data]?”

Other Vulnerabilities

Joshua Pantony, co-founder and CEO of Boosted.ai, a Toronto-based AI firm that helps investment managers and institutional investors incorporate machine learning into their investment process, says that as large language models (advanced AI systems pre-trained on large amounts of data) become more accessible to the general public, it will also be easier for bad actors to access them.

“Now we’re going to enter a world where we see sophisticated phishing emails that try to break into organizations” by targeting AI, Pantony says. “It hasn’t really caught on within the criminal element, but I think that’s going to change in coming years.”

Institutional investors want to understand how AI models are making their predictions and whether they can trust the “thinking” of the model, Pantony says.

“The No. 1 thing holding institutional investors back from using AI is the black-box nature of these things and understanding where this information comes from,” Pantony elaborates. “From our standpoint, we are very careful that it’s primarily professional investors who are using this software. You could misconstrue [the model] as overly confident,” for instance.

A key security concern for asset owners is typically ensuring that sensitive information, such as stock price data, is protected when AI is used.

“That’s increasingly causing a lot of people to invest in private cloud platforms,” Pantony says.

Data security risks investors may be trying to prevent include unintended parties gaining access to the large swaths of data that AI may scan to identify or predict market trends.

“Let’s say you are a hedge fund, and you do a lot of management calls,” and AI is used to scan this, he explains. “A lot of times, that’s perceived as being very valuable and key to investment management decisions, and you ideally don’t want other asset managers to know about that.”

The SEC recently launched a sweep of investment advisers, requesting information related to their use of AI and how the technology is overseen, The Wall Street Journal reported this month.

Vigilant’s Teufel, who is aware of the sweep, said, “There’s a lot of pressure on the SEC to do something with AI, and I think they are in the early stages of figuring out what that is.”

“They want to provide some good use cases” for how AI is being used, Teufel adds. “These examinations, these sweeps, will either prove to the industry that they do or don’t need another rule.”

Jeff DeVerter, chief technology evangelist at Rackspace Technology, a San Antonio-based cloud computing firm whose customers include financial services companies, similarly says investors need to know and consider when money managers are using third-party AI firms, as the security of those firms’ systems matters too.

Additionally, asset owners considering open-source AI tools within their organizations have to weigh the level of exposure free software may present.

“A lot of times companies will bring it in, customize it and make it their own,” DeVerter says. “They need to make sure it isn’t leaking data to the internet. … Then you have software-as-a-service [AI tools] that are free, or might as well be free because it’s $20 a month. The primary consideration is: You never, ever want to put your private or secure data in a place where it can be leaked.”

For that very reason, DeVerter said, he has seen many instances of customers building private versions of public AI models.

Asset owners need to consider not only if they will use public or private AI technology, but whether they will run it in their own data centers or in a private cloud environment, DeVerter says.

