Irregular Secures $80 Million in Funding to Enhance AI Security
On Wednesday, AI security firm Irregular announced that it has raised $80 million in funding. The round was led by Sequoia Capital and Redpoint Ventures, with participation from Wiz CEO Assaf Rappaport, and values the company at roughly $450 million, a notable milestone for the young field of AI security.
The Vision Behind Irregular
Dan Lahav, co-founder of Irregular, shared the thinking behind the company. He argues that an increasing share of economic activity will come from interactions between humans and AI, and between AI systems themselves. “That’s going to break the security stack along multiple points,” Lahav told TechCrunch, underscoring the need for robust security measures as AI capabilities expand.
Transitioning from Pattern Labs
Originally founded as Pattern Labs, Irregular has already carved out a niche in AI evaluations. Its work is cited in security assessments for frontier models, including Anthropic’s Claude 3.7 Sonnet and OpenAI’s o3 and o4-mini. At the core of Irregular’s approach is SOLVE, a framework for scoring a model’s ability to detect vulnerabilities, which has become a widely used tool across the industry.
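Irregular hasn’t published SOLVE’s internals, but capability-scoring harnesses of this kind generally aggregate per-task results into a single number. The Python sketch below is purely illustrative: the task names, difficulty weights, and scoring formula are assumptions, not Irregular’s actual methodology.

```python
from dataclasses import dataclass

@dataclass
class VulnTask:
    """One vulnerability-detection challenge in a hypothetical eval suite."""
    name: str
    difficulty: float  # weight: harder tasks contribute more to the score
    solved: bool       # did the model under test find the vulnerability?

def capability_score(tasks: list[VulnTask]) -> float:
    """Difficulty-weighted fraction of tasks solved, scaled to 0-100.
    Invented formula for illustration; not the real SOLVE metric."""
    total = sum(t.difficulty for t in tasks)
    earned = sum(t.difficulty for t in tasks if t.solved)
    return 100.0 * earned / total if total else 0.0

results = [
    VulnTask("sql-injection-basic", difficulty=1.0, solved=True),
    VulnTask("heap-overflow-crafted", difficulty=3.0, solved=False),
    VulnTask("auth-bypass-chained", difficulty=2.0, solved=True),
]
print(f"Score: {capability_score(results):.1f}/100")  # Score: 50.0/100
```

The appeal of a single difficulty-weighted score is that it lets labs compare vulnerability-detection capability across model generations on a stable scale.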
A Focus on Emergent Risks
While Irregular has made significant strides in assessing the risks of existing models, the firm is now taking on a more ambitious challenge: spotting emergent risks and behaviors before they cause problems in the wild. To that end, Irregular has built complex simulated environments in which AI models can be tested extensively before they are deployed in real-world settings.
“We have complex network simulations where AI takes on both the roles of attacker and defender,” explained co-founder Omer Nevo. This adversarial approach lets Irregular see where defenses hold up and where they break before a new model ships.
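Irregular hasn’t described how these simulations are built; the sketch below shows one common shape for such red-team/blue-team loops. Everything in it, from the class names to the random-probe agents standing in for AI-driven attackers and defenders, is an invented illustration rather than Irregular’s implementation.

```python
import random

class SimulatedNetwork:
    """Toy environment: a set of hosts, some carrying latent vulnerabilities."""
    def __init__(self, n_hosts: int = 20, vuln_rate: float = 0.4, seed: int = 0):
        self.rng = random.Random(seed)
        self.hosts = list(range(n_hosts))
        self.vulnerable = {h for h in self.hosts if self.rng.random() < vuln_rate}
        self.compromised: set[int] = set()
        self.patched: set[int] = set()

    def attacker_move(self) -> None:
        """Attacker agent probes one host; it falls if vulnerable and unpatched.
        A real harness would put a model's decision here, not a random pick."""
        target = self.rng.choice(self.hosts)
        if target in self.vulnerable and target not in self.patched:
            self.compromised.add(target)

    def defender_move(self) -> None:
        """Defender agent hardens one host it hasn't patched yet."""
        unpatched = [h for h in self.hosts if h not in self.patched]
        if unpatched:
            self.patched.add(self.rng.choice(unpatched))

def run_episode(rounds: int = 30) -> tuple[int, int]:
    """Alternate attacker and defender turns, then report the outcome."""
    net = SimulatedNetwork()
    for _ in range(rounds):
        net.attacker_move()
        net.defender_move()
    return len(net.compromised), len(net.patched)

breached, hardened = run_episode()
print(f"compromised={breached} patched={hardened}")
```

In a production harness, the attacker and defender moves would call out to the models under evaluation, and the environment would track far richer state than a patched-or-compromised flag; the value of the loop is that it surfaces failure modes before deployment rather than after.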
The Growing Importance of AI Security
The AI sector has seen a sharp rise in attention to security as the risks around frontier models grow. Companies like OpenAI have revamped their internal security protocols to guard against threats such as corporate espionage. At the same time, AI models themselves are growing more adept at finding software vulnerabilities, a capability with serious implications for attackers and defenders alike.
The Ongoing Challenge of Securing AI Models
For Irregular’s founders, this funding is only the start of a long run of security challenges driven by the rapid evolution of large language models. “If the goal of the frontier lab is to create increasingly more sophisticated and capable models, our goal is to secure these models,” Lahav said. With the landscape continuously shifting, he notes, “there’s much, much, much more work to do in the future.”
In summary, Irregular’s recent funding illustrates not only the confidence investors have in the future of AI security but also the proactive measures being taken to address the complexities posed by advancing AI technologies. The intersection of AI’s capabilities and security needs creates an exciting yet challenging landscape that will require ongoing scrutiny and innovation.