NABIL.ORG
Security - August 18, 2025

Urgent Call for AI Regulation: 81% of UK CISOs Fear Chinese AI Chatbot DeepSeek Could Trigger National Cyber Crisis
Apprehension is growing among top cybersecurity officials over the Chinese AI company DeepSeek.

Artificial Intelligence (AI) was initially hailed as a revolutionary catalyst for business efficiency and innovation. However, for those at the forefront of corporate defense, it’s casting ominous shadows.

A majority (81%) of Chief Information Security Officers (CISOs) in UK-based organizations believe that urgent government regulation is needed to control DeepSeek due to its potentially hazardous implications. They fear that without swift action, the technology could trigger a national cyber crisis.

This isn’t mere speculation; it reflects genuine concern about a technology whose data handling practices and potential for misuse are causing alarm at the highest levels of enterprise security.

The findings, compiled for Absolute Security’s UK Resilience Risk Index Report, are based on a survey of 250 CISOs from large UK corporations. The data suggests that the theoretical threat of AI has materialized and is now a pressing concern for CISOs. Their responses have been decisive.

In an unprecedented move, over a third (34%) of these security leaders have already imposed bans on AI tools due to cybersecurity concerns, and a similar proportion (30%) have discontinued specific AI implementations within their organizations.

This withdrawal is not a display of Luddism but a pragmatic response to an escalating predicament. Businesses are already grappling with complex and hostile threats, as evidenced by high-profile incidents like the Harrods breach. CISOs are finding it challenging to keep pace, and the introduction of advanced AI tools into the attacker’s arsenal is a challenge many feel inadequately equipped to handle.

The primary concern with platforms like DeepSeek lies in their potential to expose sensitive corporate data and serve as weapons for cybercriminals.

Three in five CISOs (60%) anticipate an increase in cyberattacks due to DeepSeek’s proliferation. An identical proportion reports that the technology is already complicating their privacy and governance frameworks, making an already challenging job nearly impossible.

This has led to a shift in perspective. Once viewed as a potential solution for cybersecurity issues, AI is now seen by a growing number of professionals as part of the problem. The survey reveals that 42 percent of CISOs now consider AI to be a greater threat than an aid to their defensive efforts.

Andy Ward, SVP International at Absolute Security, stated: “Our research underscores the significant risks posed by emerging AI tools like DeepSeek, which are rapidly reshaping the cyber threat landscape.

As concerns grow over their potential to accelerate attacks and compromise sensitive data, organizations must act now to strengthen their cyber resilience and adapt security frameworks to keep pace with these AI-driven threats.

That’s why four in five UK CISOs are urgently calling for government regulation. They’ve witnessed how quickly this technology is advancing and how easily it can outpace existing cybersecurity defenses.”

Perhaps most alarming is the admission of unpreparedness. Nearly half (46%) of the senior security leaders admit that their teams are not ready to manage the unique threats posed by AI-driven attacks. They are witnessing the development of tools like DeepSeek outpacing their defensive capabilities in real-time, creating a dangerous vulnerability gap that many believe can only be closed by national-level government intervention.

“These are not hypothetical risks,” Ward continued. “The fact that organizations are already banning AI tools outright and rethinking their security strategies in response to the risks posed by LLMs like DeepSeek demonstrates the urgency of the situation.

Without a national regulatory framework – one that sets clear guidelines for how these tools are deployed, governed, and monitored – we risk widespread disruption across every sector of the UK economy.”

Despite this defensive posture, businesses are not planning a complete withdrawal from AI. The response is a strategic pause rather than a permanent stop.

Businesses recognize the immense potential of AI and are actively investing to adopt it safely. In fact, 84 percent of organizations are prioritizing the hiring of AI specialists for 2025.

This investment extends to the very top of the corporate ladder: 80 percent of companies have committed to AI training at the C-suite level. The strategy appears to be dual-pronged: upskill the workforce to understand and manage the technology, and bring in the specialized talent needed to navigate its complexities.

The hope – and it is a hope, if not a prayer – is that building a strong internal foundation of AI expertise can act as a counterbalance to the escalating external threats.

The message from the UK’s security leadership is clear: they do not want to block AI innovation, but to enable it to proceed safely. To do that, they require a stronger partnership with government.

The path forward involves establishing clear rules of engagement, government oversight, a pipeline of skilled AI professionals, and a coherent national strategy for managing the potential security risks posed by DeepSeek and the next generation of powerful AI tools that will inevitably follow.

“The time for debate is over. We need immediate action, policy, and oversight to ensure AI remains a force for progress, not a catalyst for crisis,” Ward concludes.