
A group of prominent investors, including Andreessen Horowitz (a16z) and the OpenAI Startup Fund, has poured $43 million into Adaptive Security, a new startup promising technology to counter the surge in deepfake social engineering and AI-powered threats.
The startup, founded by serial entrepreneurs Brian Long and Andrew Jones, is building a security platform designed to replicate real-world attack scenarios through AI-generated deepfake simulations. The pitch is to provide businesses with a testing ground to beef up internal human defenses.
The $43 million early-stage round, co-led by a16z and OpenAI’s venture capital arm, also included participation from Abstract Ventures, Eniac Ventures, CrossBeam Ventures and K5.
The Adaptive Security deal marks OpenAI’s first-ever investment in a cybersecurity company.
Long and Jones, whose track records include exits at TapCommerce and Attentive, are betting there’s a big enterprise market for security tooling to thwart sophisticated, AI-generated personas that can impersonate executives or employees and launch multi-pronged attacks across corporate communication channels.
“With the right models and data, we can simulate realistic AI attacks, train employees to recognize threats, triage suspicious behavior in real time, and surface risk before it turns into loss,” the company said in a note announcing the capital raise.
At its core, Adaptive is pitching a platform that offers corporate defenders AI-powered deepfake attack simulations, real-time threat triage, next-generation security training, and AI-driven risk scoring.
The company said the product is capable of simulating real-world attacks using AI-generated personas across voice, SMS, and email. Much like existing security awareness programs delivered via video, the idea is to test employees against deepfake social engineering lures.
“If they fall for it, our system flags the risk instantly and delivers personalized training on the spot,” the company said.
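The workflow described above — run a simulated lure over a channel, record whether each employee falls for it, and flag those who do for immediate training — can be sketched roughly as follows. All class and method names here are invented for illustration; Adaptive has not published details of its platform.

```python
# Hypothetical sketch of a deepfake-lure simulation campaign: record each
# employee's response per channel and surface anyone who fell for a lure.
from dataclasses import dataclass, field


@dataclass
class SimulationResult:
    employee: str
    channel: str          # "voice", "sms", or "email"
    fell_for_lure: bool


@dataclass
class SimulationCampaign:
    results: list = field(default_factory=list)

    def record(self, employee: str, channel: str, fell_for_lure: bool) -> None:
        self.results.append(SimulationResult(employee, channel, fell_for_lure))

    def flagged(self) -> list:
        """Employees who fell for a lure and should get on-the-spot training."""
        return [r.employee for r in self.results if r.fell_for_lure]


campaign = SimulationCampaign()
campaign.record("alice", "email", False)
campaign.record("bob", "voice", True)   # fell for a simulated deepfake call
print(campaign.flagged())               # -> ['bob']
```

In a real deployment the flagged list would presumably feed the personalized-training step the company describes, rather than just being printed.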
The Adaptive platform also promises real-time threat triage when someone at a company reports a suspicious message. “Our AI doesn’t just forward it to IT, it analyzes the message in real time, scores the risk, and helps security teams act fast,” the company said.
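To make the triage idea concrete, here is a toy risk scorer for a reported message. The signals and weights are invented purely to illustrate turning a report into a numeric score a security team can prioritize on; Adaptive's actual analysis is AI-driven and not public.

```python
# Toy heuristic: score a reported message from 0 (benign) to 100 (high risk)
# based on urgency language, sender familiarity, and embedded links.
URGENCY_WORDS = {"urgent", "immediately", "now", "wire"}


def risk_score(message: str, sender_known: bool, has_link: bool) -> int:
    score = 0
    words = [w.strip(".,!:") for w in message.lower().split()]
    score += 40 if any(w in URGENCY_WORDS for w in words) else 0
    score += 30 if not sender_known else 0
    score += 30 if has_link else 0
    return score


msg = "Urgent: wire the payment now, link below"
print(risk_score(msg, sender_known=False, has_link=True))  # -> 100
```

A production system would combine far richer signals (headers, voice/video artifacts, behavioral history), but the triage output is the same shape: a score that tells defenders what to look at first.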
The company said the platform also includes generative-AI tools for businesses to create tailored content in seconds (text, visuals and video) based on any topic or internal policy.
Related: Bureau Raises $30M to Tackle Deepfakes, Payment Fraud
Related: Inside a Hacker’s Playbook – How Cybercriminals Use Deepfakes
Related: Surf Security Adds Deepfake Detection Tool to Enterprise Browser
Related: Reality Defender Banks $33M to Tackle AI-Generated Deepfakes