AI - July 18, 2025

Can AI Advancements Ensure Both Speed and Safety?


Navigating the Safety-Velocity Paradox in Artificial Intelligence Development: A Call for Change in Industry Practices

In a recent public critique, Boaz Barak, a Harvard professor on leave and researcher at OpenAI, condemned the launch of xAI’s Grok model as “completely irresponsible.” The criticism centered on the lack of transparency, particularly the absence of a public system card and detailed safety evaluations, which are becoming industry standards.

However, a post by former OpenAI engineer Calvin French-Owen, written three weeks after his departure from the company, offers a different perspective. French-Owen’s account suggests that numerous people at OpenAI are actively engaged in safety research, focusing on potential threats such as hate speech, bio-weapons, and self-harm. Yet he highlights an underlying issue: most of this work remains unpublished, and OpenAI “should do more to get it out there.”

This revelation complicates the initial narrative of a responsible actor criticizing an irresponsible one. Instead, it exposes the industry-wide dilemma known as the Safety-Velocity Paradox—a deep-seated conflict between the need for rapid advancement and the moral imperative to proceed with caution.

French-Owen paints a picture of OpenAI as a dynamic organization, having tripled its workforce to over 3,000 in a single year, operating under immense pressure from a “three-horse race” for Artificial General Intelligence (AGI) against Google and Anthropic. This competitive environment fosters a culture of extraordinary speed but also secrecy.

The development of OpenAI’s Codex, a coding agent, serves as an example of this velocity. French-Owen describes the project as a “mad-dash sprint,” where a small team rapidly developed a groundbreaking product in just seven weeks, at the cost of sleep and leisure time.

This scenario underscores the human toll of such rapid advancement. In an environment moving this fast, the slow, meticulous work of publishing AI safety research can feel like a distraction from the race.

The Safety-Velocity Paradox is not born of malice but of a complex interplay of competitive pressure, cultural values, and measurement problems. The need to be first, the DNA of these labs as loose groups of scientists and tinkerers who value breakthroughs over methodical process, and the difficulty of quantifying averted disasters all contribute to the paradox.

In today’s boardrooms, the visible metrics of velocity drown out the silent successes of safety. Moving forward, the goal is not to assign blame but to redefine what it means to ship a product: the publication of a safety case should be as integral as the code itself. Industry-wide standards that remove the competitive penalty for diligence are needed to turn safety from an optional feature into a shared, non-negotiable foundation.

Above all, we need to cultivate a culture within AI labs where every engineer—not just those in the safety department—feels a sense of responsibility. The race to create AGI is not about who gets there first; it is about how we get there. The true winner will not be the company that is merely the fastest, but the one that demonstrates to the world that ambition and responsibility can, and must, coexist.