How Irregular's $80M Funding is Shaping the Future of AI Security
TechCrunch, 3 hours ago


Tags: AI security, funding, startup, innovation, technology

Summary:

  • Irregular raised $80 million in funding, led by Sequoia Capital and Redpoint Ventures, valuing the company at $450 million.

  • The startup focuses on securing AI models by identifying and mitigating emergent risks through advanced simulations.

  • Their SOLVE framework, which scores a model's ability to detect software vulnerabilities, is widely adopted across the industry.

  • Security is a critical concern as AI capabilities grow, with models increasingly finding software vulnerabilities.

  • Irregular's work is cited in evaluations for major AI models like Claude 3.7 Sonnet and OpenAI's o3 and o4-mini.

Irregular Secures $80 Million to Fortify Frontier AI Models

On Wednesday, AI security firm Irregular announced a significant $80 million funding round, led by Sequoia Capital and Redpoint Ventures, with participation from Wiz CEO Assaf Rappaport. A source close to the deal revealed that this investment values the company at $450 million.

Co-founder Dan Lahav emphasized the urgency, stating, "Our view is that soon, a lot of economic activity is going to come from human-on-AI interaction and AI-on-AI interaction, and that’s going to break the security stack along multiple points."

Formerly known as Pattern Labs, Irregular is already a major player in AI evaluations: its work is cited in security assessments for models such as Claude 3.7 Sonnet and OpenAI's o3 and o4-mini, and its SOLVE framework, which scores a model's ability to detect software vulnerabilities, is widely adopted across the industry.

Beyond addressing existing risks, Irregular is focusing on emergent risks and behaviors before they manifest in real-world scenarios. They've developed an elaborate system of simulated environments to intensively test models pre-release.

Co-founder Omer Nevo explained, "We have complex network simulations where we have AI both taking the role of attacker and defender. So when a new model comes out, we can see where the defenses hold up and where they don’t."
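The red-team/blue-team simulation Nevo describes can be illustrated with a toy sketch. Everything below is an illustrative assumption, not Irregular's actual system: an attacker agent tries to compromise hosts that still have open vulnerabilities, while a defender agent races to patch them.

```python
import random

# Toy attacker/defender network simulation (illustrative only;
# all names and mechanics are hypothetical, not Irregular's system).

random.seed(0)

class Host:
    def __init__(self, name, vulns):
        self.name = name
        self.vulns = set(vulns)   # open, unpatched vulnerabilities
        self.compromised = False

def attacker_step(hosts):
    """Attacker compromises one host that still has an open vulnerability."""
    targets = [h for h in hosts if h.vulns and not h.compromised]
    if not targets:
        return None
    host = random.choice(targets)
    host.compromised = True
    return host.name

def defender_step(hosts):
    """Defender patches one open vulnerability on an uncompromised host."""
    targets = [h for h in hosts if h.vulns and not h.compromised]
    if not targets:
        return None
    host = random.choice(targets)
    host.vulns.pop()  # patch an arbitrary open vulnerability
    return host.name

def run_simulation(hosts, rounds=10):
    log = []
    for _ in range(rounds):
        defended = defender_step(hosts)
        attacked = attacker_step(hosts)
        log.append((defended, attacked))
        # Stop once every host is either compromised or fully patched.
        if all(h.compromised or not h.vulns for h in hosts):
            break
    breached = sum(h.compromised for h in hosts)
    return breached, log

hosts = [Host("web", ["CVE-A"]), Host("db", ["CVE-B", "CVE-C"]), Host("mail", [])]
breached, log = run_simulation(hosts)
print(f"{breached} of {len(hosts)} hosts compromised")
```

Swapping in a new model as the attacker (or defender) policy, instead of `random.choice`, is how such an environment could reveal where defenses hold up and where they do not.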

Security is a top priority in the AI industry, with frontier models posing increasing risks. OpenAI recently overhauled its internal security measures to combat potential corporate espionage. Additionally, AI models are becoming more adept at finding software vulnerabilities, impacting both attackers and defenders.

Lahav concluded, "If the goal of the frontier lab is to create increasingly more sophisticated and capable models, our goal is to secure these models. But it’s a moving target, so inherently there’s much, much, much more work to do in the future."
