Silicon Valley’s Longstanding Warnings
For years, Silicon Valley has sounded the alarm about the risks of artificial intelligence. Today, that anxiety is spreading to the legal system, global business leaders, and top Wall Street regulators. Recently, the Financial Industry Regulatory Authority (FINRA) labeled AI as an “emerging risk,” and the World Economic Forum in Davos highlighted AI-driven misinformation as the most significant near-term threat to the global economy.
Regulatory Concerns and Warnings
The Financial Stability Oversight Council (FSOC) also expressed concerns about AI's potential for “direct consumer harm,” with SEC Chair Gary Gensler warning that widespread reliance on similar AI models across investment firms could threaten financial stability. “AI may play a central role in the after-action reports of a future financial crisis,” Gensler stated.
At the World Economic Forum’s annual meeting, AI was a dominant theme. A survey of 1,500 policymakers and industry leaders identified AI-fueled fake news and propaganda as the biggest short-term risk to the global economy. With major elections approaching in countries like the U.S., Mexico, Indonesia, and Pakistan, experts worry that AI will facilitate the spread of false information, heightening societal conflict. Chinese propagandists are reportedly using generative AI to influence politics in Taiwan, illustrating the immediate risks posed by this technology.
AI’s Potential and Risks in Finance
FINRA’s annual report highlighted AI's potential benefits, such as cost and efficiency gains, but also noted significant concerns about accuracy, privacy, bias, and intellectual property. The Treasury-led FSOC warned that design flaws in AI systems could result in biased decisions, such as unjustly denying loans. Generative AI, despite its advantages, can produce convincing yet incorrect conclusions, raising the stakes for financial oversight.
SEC's Gensler has been vocal about the risks, with the SEC requesting information from investment advisers about their use of AI. The commission proposed new rules to address conflicts of interest in AI-driven predictive data analytics. “Any resulting conflicts of interest could cause harm to investors in a more pronounced fashion and on a broader scale than previously possible,” the SEC noted.
Financial services firms see AI’s potential for improving operations, but also recognize the heightened risks. Algorithms that make financial decisions could produce biased outcomes or trigger a market meltdown if widely adopted systems simultaneously execute sell orders.
“This is a different thing than the stuff we’ve seen before. AI has the ability to do things without human hands,” said Jeremiah Williams, a former SEC official now with Ropes & Gray in Washington. Even the Supreme Court is wary, with Chief Justice John G. Roberts Jr. noting AI’s potential to invade privacy and dehumanize the law.
OpenAI's Proactive Measures
OpenAI, the company behind ChatGPT, has laid out plans to stay ahead of AI's potential dangers, such as enabling bad actors to create chemical and biological weapons. The company's “Preparedness” team, led by MIT AI professor Aleksander Madry, will hire AI researchers, computer scientists, national security experts, and policy professionals to monitor, test, and flag any dangerous capabilities of their technology. This team sits between OpenAI’s “Safety Systems” team, which addresses existing problems like AI biases, and the “Superalignment” team, which focuses on preventing AI from harming humans in a future where AI surpasses human intelligence.
The popularity of ChatGPT and the advance of generative AI have triggered a debate within the tech community about the potential dangers of AI. Prominent AI leaders from OpenAI, Google, and Microsoft have warned that AI could pose existential risks on par with pandemics or nuclear weapons. Other AI researchers argue that these concerns distract from the technology's current harmful effects. Meanwhile, a growing group of AI business leaders believes the risks are overblown and advocates for continued development to improve society and drive profits. OpenAI's Chief Executive Sam Altman has staked out a middle ground, acknowledging serious long-term risks while emphasizing the need to address current issues. Altman also supports regulations that prevent harmful AI impacts without stifling competition.
Madry, who directs MIT’s Center for Deployable Machine Learning, joined OpenAI this year. He was among the leaders who left when Altman was briefly ousted by OpenAI's board, returning when Altman was reinstated. OpenAI’s nonprofit board, tasked with advancing AI for human benefit, is in the process of selecting new members following recent resignations.
Despite leadership turbulence, Madry believes the board takes AI risks seriously. “If I really want to shape how AI is impacting society, why not go to a company that is actually doing it?” he said. The preparedness team will hire national security experts and open discussions with organizations such as the National Nuclear Security Administration so the company can appropriately study the risks its technology poses. The team will also monitor how OpenAI's models might instruct users in dangerous activities beyond what ordinary online research would reveal.
OpenAI will also allow qualified, independent third parties to test its technology, ensuring comprehensive scrutiny. Madry criticizes the simplistic dichotomy between AI “doomers” and “accelerationists,” advocating for a balanced approach that maximizes AI's benefits while mitigating its downsides.
Broader Implications and Future Steps
As AI becomes more complex and capable, "black box" automation, in which the reasoning behind AI decisions is opaque, carries significant risks. Poorly designed systems could undermine trust in financial transactions, warns Richard Berner, a clinical professor of finance at NYU's Stern School of Business. The FBI's Internet Crime Complaint Center logged nearly 900,000 complaints last year, with potential losses exceeding $12.5 billion, and experts predict generative AI will drive a $2 billion annual increase in identity fraud.
The debate over AI's potential dangers intensified after OpenAI’s ChatGPT launch, highlighting both the technology’s capabilities and its risks. Policymakers worldwide are grappling with how AI fits into society. Last year, Congress held multiple hearings on AI, and President Biden called it the “most consequential technology of our time.” UK Prime Minister Rishi Sunak warned that “humanity could lose control of AI completely,” reflecting widespread concerns about generative AI's misuse.
Critics argue that some tech leaders, like OpenAI CEO Sam Altman, are both promoting AI’s risks and pushing its development. Smaller companies claim that AI giants like OpenAI, Google, and Microsoft are using these warnings to trigger regulations that could stifle competition. “There’s a disconnect between what’s said and what’s actually possible,” said Margaret Mitchell, chief ethics scientist at Hugging Face. As the public becomes more familiar with generative AI, its flaws and issues become more apparent.
In conclusion, as AI continues to evolve, its implications for finance, business, and law will demand careful management and regulation to harness its benefits while mitigating its risks. OpenAI’s proactive measures and the growing recognition of AI risks by regulators and industry leaders highlight the importance of balancing innovation with responsible oversight.