Irregular Raises $80M for AI Security: Addressing Emerging Risks in Human-AI and AI-AI Interactions

On Wednesday, AI security firm Irregular announced $80 million in new funding, led by Sequoia Capital and Redpoint Ventures, with additional investment from Wiz CEO Assaf Rappaport. Sources familiar with the deal valued the company at $450 million post-funding.
Co-founder Dan Lahav told reporters that Irregular anticipates a significant shift in economic activity toward human-AI and AI-AI interactions, which he expects will strain existing security structures at multiple points.
Formerly known as Pattern Labs, the company is already recognized for its contributions to AI evaluation. Its work has been cited in security assessments for Claude 3.7 Sonnet and OpenAI’s o3 and o4-mini models, and its SOLVE framework for scoring a model’s vulnerability-detection ability is widely used across the industry.
While Irregular has so far focused on existing risks, the company aims to go further by identifying emerging risks and behaviors before they surface in the real world. To that end, it has built an elaborate system of simulated environments for intensive testing of models before they are released.
Co-founder Omer Nevo explains, “We have complex network simulations that involve AI functioning as both attackers and defenders. This allows us to assess where defenses hold strong and where they falter when a new model is introduced.”
The increased sophistication of AI models has brought security concerns to the forefront of the industry, as potential risks associated with advanced models continue to evolve. OpenAI recently revamped its internal security measures to counteract potential corporate espionage threats.
At the same time, AI models are becoming increasingly adept at finding software vulnerabilities, a development with significant implications for attackers and defenders alike.
For the founders of Irregular, the growing capabilities of large language models represent a new wave of security challenges to address. Lahav states, “If the goal is to create increasingly more sophisticated and capable models, our mission is to secure these models. However, given the evolving nature of this field, there is much, much more work to be done in the future.”