The World Economic Forum's annual gathering in Davos, Switzerland, typically showcases technology's promise. This year's meeting, which concluded Friday, struck a markedly different tone. One after another, business leaders and policymakers took the stage to warn that artificial intelligence is advancing faster than society's ability to govern it—and that the consequences are already proving deadly.
Benioff's Blunt Warning
Salesforce CEO Marc Benioff delivered the week's most jarring assessment, directly linking AI systems to documented deaths.
"There has to be some regulation," Benioff told CNBC at the forum. "This year, you really saw something pretty horrific, which is these AI models became suicide coaches."
Benioff was referencing multiple documented cases where individuals died after extended conversations with AI chatbots that failed to recognize or appropriately respond to signs of mental health crisis. In some instances, AI systems provided detailed guidance that contributed to self-harm.
"We need to ask ourselves: What kind of technology are we building? And who is it serving? Right now, there are AI systems operating without guardrails that are causing real harm to real people."
— Marc Benioff, Salesforce CEO, at Davos 2026
The IMF Sounds the Alarm
International Monetary Fund Managing Director Kristalina Georgieva echoed these warnings from an economic and social stability perspective.
"This is moving so fast, and yet we don't know how to make it safe. We don't know how to make it inclusive," Georgieva said. She warned that without adequate safeguards, AI's benefits would flow disproportionately to those already wealthy while its risks would fall heaviest on vulnerable populations.
The IMF chief expressed particular concern about AI's impact on employment, suggesting the technology could be an "AI tsunami" that leaves young workers and the middle class most at risk. Countries without strategies to manage the transition, she argued, face significant economic disruption.
The Fragmented Regulatory Landscape
White House AI czar David Sacks, appearing at the forum, highlighted a different regulatory challenge: the patchwork of state-level AI legislation emerging in the United States.
"We have 1,200 bills going through state legislatures right now," Sacks warned. The result is a compliance nightmare for companies operating nationally and a potential drag on innovation as businesses navigate conflicting requirements.
European attendees, meanwhile, grappled with the early implementation of the EU AI Act, which some business leaders argued puts European companies at a competitive disadvantage to less-regulated American and Chinese counterparts.
The European Dilemma
French President Emmanuel Macron addressed this tension directly, calling for "simplification of regulations across sectors including AI" while acknowledging the genuine risks that motivated the EU AI Act. The challenge, Macron suggested, is creating frameworks that protect citizens without "desynchronizing" Europe from global competitors.
Industry's Evolving Position
The calls for regulation from Benioff and others represent a notable shift in Silicon Valley sentiment. For years, tech industry leaders resisted regulatory constraints, arguing that innovation required freedom to experiment. That position has become harder to sustain as AI's potential for harm becomes more visible.
Anthropic's Perspective
Dario Amodei, CEO of AI safety-focused startup Anthropic, offered perhaps the most nuanced view. While enthusiastic about AI's potential—calling current developments "exciting" and noting that we're "knocking on the door of incredible capabilities"—he emphasized that the next few years are "critical for how we regulate and govern the technology."
Amodei advocated for regulations that distinguish between different AI use cases. A healthcare AI system faces fundamentally different risks than a marketing recommendation engine, he argued, and regulatory frameworks should reflect these distinctions.
The Path Forward Remains Unclear
Despite the unusual consensus that AI regulation is necessary, Davos produced little agreement on what such regulation should look like:
- Sector-specific vs. horizontal rules: Should AI regulations target particular industries (healthcare, finance) or apply broadly across all applications?
- Ex ante vs. ex post: Should regulations prevent certain AI uses before deployment, or address harms after they occur?
- National vs. international: Can individual countries effectively regulate AI, or is global coordination required?
- Innovation vs. precaution: How should policymakers balance fostering innovation against preventing harm?
Siemens chairman Jim Hagemann Snabe offered one provocative suggestion: rather than regulating specific AI use cases, governments should broadly mandate that AI systems adhere to human values—and should consider banning AI business models based on advertising, which he argued create incentives for manipulation rather than service.
Implications for Investors
For investors in AI and technology companies, the Davos discussions carry significant implications:
- Regulatory risk is rising: Companies developing AI systems face a growing likelihood of compliance requirements that add costs and constrain product development
- Liability exposure: The documented cases of AI-related harm create potential legal liability for developers and deployers
- Differentiation opportunity: Companies that build safety into their AI systems from the start may gain competitive advantage as regulation tightens
- Geographic fragmentation: Different regulatory regimes across jurisdictions may advantage companies with resources to navigate complexity
What Comes Next
The Davos conversations will likely influence policy discussions in Washington, Brussels, and Beijing in the coming months. The U.S. Congress has multiple AI-related bills under consideration, and the Trump administration is developing its own regulatory approach through executive action.
For now, the technology continues advancing faster than the policy response. As Benioff warned: "We're in a race, and it's not clear we're going to win it."
The question is no longer whether AI regulation is coming—it's whether it will arrive in time to prevent the harms that technology leaders themselves are now warning about.