AI safety report warns industry is 'structurally unprepared' for rising risks

The Winter 2025 AI Safety Index, released Wednesday by the Future of Life Institute, evaluated the safety protocols of eight major AI developers.

A new independent assessment of AI safety practices across the industry’s biggest players is raising alarms about how far behind companies remain as their models rapidly advance.

The Winter 2025 AI Safety Index, released Wednesday by the Future of Life Institute, evaluated the safety protocols of eight major AI developers, including the makers of ChatGPT, Gemini, and Claude, and concluded that many firms “lack the concrete safeguards, independent oversight and credible long-term risk-management strategies that such powerful systems demand.”

The analysis examined dozens of indicators across six domains, covering everything from companies’ risk assessments and model transparency to whistleblower protections and existential-risk planning. It singles out existential-risk planning as a growing gap, noting that “proactive planning… has become a pressing need.”

The expert panel assigned letter grades to each company based on the safety indicators. Only two companies earned a passing C grade, and only barely. Anthropic, the highest-ranked company on the list, received a C+, while the lowest-ranked, Alibaba Cloud, received a D-.

“We’ve started to see this great division between the companies that are on the top tier, where they're doing more in a publicly transparent way, up against the companies that we think there is room for improvements,” Sabina Nong, an AI safety investigator for the Future of Life Institute, told Scripps News in an interview. “We’ve seen more state-of-the-art practices that are adopted by the other companies.”

Researchers say the divide reflects deeper differences in how organizations prioritize safety. Some companies have begun adopting baseline measures such as watermarking AI-generated images or publishing model cards. But others still lack clear governance structures, independent evaluation, or policies to protect employees who raise safety concerns.

RELATED STORY | AI is now screening prison communications to forecast crimes

The report warns that voluntary safety frameworks have not kept pace with the speed at which new, more capable models are being released. That is even more concerning, Nong says, with the prospect of superintelligence looming.

“We all know that companies have set their ambitions to march towards superintelligence, to march toward AGI, where some of the tech CEOs have referred to it as a new species,” Nong told Scripps News. “We are potentially creating something that is more powerful than human power can actually monitor and control. And I think with these companies’ dreams, there definitely comes more responsibilities where we would love to see companies having more concrete plans of control.”

Many companies say they’re working on stronger safeguards in newer versions of their models, and the report acknowledges that improvements are emerging. But the authors caution that these changes are incremental, while model capabilities are accelerating sharply.

“This widening gap between capability and safety leaves the sector structurally unprepared for the risks it is actively creating,” the report warns.

READ MORE | Lawmakers press tech and health experts on AI safety and data privacy, hoping to shape future regulation

To close that gap, the Index recommends that AI companies expand transparency around internal testing and risk assessments; use independent, third-party safety evaluators; strengthen protections for researchers and whistleblowers; address emerging risks like AI “psychosis” and harmful hallucinations; reduce lobbying efforts that hinder regulation; and release clearer crisis-management and long-term safety plans.

Beyond the call for greater transparency, researchers emphasize a “dire need” for regulation, as enforcement mechanisms remain slow and inconsistent.

For now, the authors say consumers are left relying on tools that are more capable than ever, but backed by safety systems that remain “incomplete, inconsistent, and underdeveloped.”