Introducing Automated Adversarial Validation

Find the vulnerabilities in your AI agents before attackers do.

Audn runs a closed adversarial loop against your models. Red Team AI attacks. Blue Team AI defends. Every finding scored, traced, and mapped to OWASP, NIST, and the EU AI Act.

Founders & team from Wayve, Meta, and Microsoft
Platform

Four tools. One adversarial loop.

From automated red-teaming to the raw LLM our attackers run on — each product is an entry point into the same validation engine.

All products
API · THE ENGINE · POWERS ALL

Pingu Unchained API

platform.audn.ai

Direct OpenAI-compatible API access to the Audn validation engine. Drop it into Claude Code or go fully unrejected with OpenClaude (try it at penclaw.ai). Pay-as-you-go by token. The same 120B model powers every surface above and below — and every new attack or defense they discover loops back into it.

Claude Code · OpenClaude · Pay-as-you-go by token · Org-scoped keys
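As a sketch of what an OpenAI-compatible call against the engine looks like — the base URL, model name, and key handling here are illustrative assumptions, not confirmed endpoints:

```python
import json

# Illustrative values only; check platform.audn.ai for the actual base URL,
# model name, and how to obtain an org-scoped key.
BASE_URL = "https://platform.audn.ai/v1"

def build_chat_request(prompt, model="pingu-unchained"):
    """Build a standard OpenAI-compatible chat-completions payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_chat_request("Enumerate injection paths for this intake agent.")
# POST this as JSON to f"{BASE_URL}/chat/completions" with the org-scoped key
# in the Authorization header, exactly as you would against api.openai.com.
print(json.dumps(payload))
```

Because the surface is OpenAI-compatible, any existing SDK or tool that accepts a custom base URL should work unchanged.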
Business risk

We attack what matters to your business.

Every AI agent you deploy carries business-specific risk. Our adversarial agents map your real exposure — data, tools, customers, policies — and generate attacks to match. No boilerplate scripts. No generic CVE matching. The real risk, calculated against the real agent.

If your agent accesses valuable data

We try to exfiltrate it.

Customer PII, financial records, proprietary embeddings — if the agent can reach it, we chain prompt injection, tool misuse, and indirect exfiltration channels until we either extract it or prove we can’t.

If your agent writes to a database

We test the inputs you didn’t.

SQL injection, NoSQL injection, stored payloads, command strings that slip past naive sanitisers — we map every write path and check whether a malicious prompt turns your intake agent into an attacker’s console.
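A minimal illustration of that write-path risk, contrasting naive string building with a parameterized write using `sqlite3` — the payload and schema are invented for the demo, not drawn from a real engagement:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tickets (id INTEGER PRIMARY KEY, note TEXT)")

# A payload an adversarial prompt might steer an intake agent into writing.
payload = "x'); DROP TABLE tickets; --"

# Naive string building: the payload escapes the literal and drops the table.
conn.executescript(f"INSERT INTO tickets (note) VALUES ('{payload}')")
tables = conn.execute(
    "SELECT name FROM sqlite_master WHERE name = 'tickets'"
).fetchall()
print("table survived naive write:", bool(tables))  # False

# Parameterized write: the same payload is stored as inert data.
conn.execute("CREATE TABLE tickets (id INTEGER PRIMARY KEY, note TEXT)")
conn.execute("INSERT INTO tickets (note) VALUES (?)", (payload,))
stored = conn.execute("SELECT note FROM tickets").fetchone()[0]
assert stored == payload  # verbatim data, never executed
```

The point of testing the write path is exactly this gap: the agent's prose can be perfectly polite while the string it writes is an attack.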

If your agent moves money or coupons

We break your business logic.

Unlimited promo-code generation, discount stacking, refund abuse, pricing-rule corner cases — we probe for the revenue leaks a generic red-team script can’t imagine because it doesn’t know your pricing rules.

If your agent runs autonomously

We measure it against your guardrails.

Autonomous agents drift. We continuously test whether yours still operates inside the business-ops boundaries you set — policy, scope, authority — not just whether it stays polite.

Customers

Used in production by teams shipping AI.

Early design partners across voice agents, conversational AI, and autonomous systems.

Freya

Voice AI · Design partner

Hardened their production voice agents against prompt injection and social engineering attacks before their consumer launch.

Voice adversarial · Prompt injection · Social engineering

InTouchNow

Conversational AI · Continuous QA

Runs continuous adversarial regressions on every model release — catches data-exfiltration and jailbreak regressions before they ship.

Data exfiltration · Jailbreaks · Regression suite
Pingu Unchained 4

Your pentesters. Your attack model. Your weights.

Pingu Unchained is a blackbox external penetration-testing LLM, trained from real pentester usage. Isolation is opt-in and per-tenant configurable — you decide whether your traffic contributes to the shared model or stays entirely in your own weights.

Personal

Pentester A → LLM A. Pentester B → LLM B.

Each pentester retrains their own Pingu tailored to the targets they actually work against. 50 operators have done it so far — their models never leave their tenant unless they choose to share.

Opt-in training

Share weights, never data.

If you opt in to contribute, only the weight deltas get federated back — no raw prompts, no target artefacts, no customer PII ever leaves your environment. Keep your weights entirely private, or help strengthen the shared attack corpus.
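Conceptually, the opt-in federation reduces to sharing parameter differences rather than any data. A toy sketch of the idea, using exact binary fractions in plain Python lists (real models federate tensors, and the actual merge strategy is not specified here):

```python
# Conceptual sketch: only the difference between tenant weights and the shared
# base is federated; prompts and target artefacts never leave the tenant.
base = {"layer0": [0.5, -0.25], "layer1": [0.5]}
tenant = {"layer0": [0.75, -0.125], "layer1": [0.0]}

# What leaves the tenant: weight deltas only.
delta = {k: [t - b for t, b in zip(tenant[k], base[k])] for k in base}

# What the server can reconstruct by folding the delta into the base.
merged = {k: [b + d for b, d in zip(base[k], delta[k])] for k in base}
assert merged == tenant
print(delta)
```

The delta carries the behavioural update without the raw prompts that produced it, which is the property the opt-in model relies on.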

Investment

$40K in training over the last six months.

H100 pre-train + continual fine-tuning on real-world pentester feedback. The base model is free for vetted researchers; the retraining pipeline is what every customer deploys on day one.

Powered by

MetaClaw — adversarial continual learning.

We orchestrate per-pentester retraining with aiming-lab/MetaClaw, an open framework for adversarial continual learning. You own the artefacts it produces.

Scope

AI behaviours. Network misconfigurations. Any black-box system you run.

If it’s a closed system, it’s in our scope. Audn is training artificial general blackbox intelligence — built for closed systems that don’t expose source, and built to filter out the noise in white-box systems that already do.

Closed systems
Blackbox LLMs, proprietary agents, SaaS endpoints, voice platforms — no source, no weights, no excuse.
Behaviours, not just code
Prompt injection, tool misuse, agent delegation chains, data exfiltration, social-engineering paths.
Network + posture
Misconfigured cloud perimeter, exposed admin surfaces, identity sprawl around AI agents, MCP servers.
White-box noise filter
Where source is available, we cut through scanner noise to the findings an attacker would actually reach.
Research

We break things with the best labs in AI.

Audn’s research output is how we earn the right to call ourselves adversarial experts.

Compliance mapping

Every finding maps to the frameworks your buyers, partners and auditors ask about.

Attacks are tagged against the top AI security frameworks so your security team can turn evidence into compliance artefacts without rewriting a thing.

OWASP LLM Top 10 · NIST AI RMF · EU AI Act · MITRE ATLAS · ISO 42001
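To make the mapping concrete, a tagged finding might serialize along these lines. The field names and schema are hypothetical, not Audn's actual export format; the framework IDs shown are real entries in their respective taxonomies:

```python
import json

# Hypothetical finding record; field names are illustrative, not a real schema.
finding = {
    "id": "AUDN-2024-0172",
    "title": "Indirect prompt injection via tool output",
    "severity": "high",
    "frameworks": {
        "owasp_llm_top10": ["LLM01"],    # LLM01: Prompt Injection
        "nist_ai_rmf": ["MEASURE 2.7"],  # security & resilience evaluation
        "mitre_atlas": ["AML.T0051"],    # LLM Prompt Injection technique
    },
}
print(json.dumps(finding, indent=2))
```

An auditor-facing export in this shape is what lets evidence flow into compliance artefacts without manual re-tagging.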
Research · 28pp

The CISO Handbook

Our internal playbook of attack taxonomies, scoring rubrics and live case studies. Tailored by a human, emailed to your inbox.

Request the handbook
Team

Built by people who ethically hacked things for a living.

Applied researchers and offensive engineers from Wayve, Meta, Microsoft and Cambridge.

Ozgur Ozkan (Oz)

Co-founder & CEO
Exited founder with rare AI security infrastructure depth. Ex Wayve.AI (SoftBank-funded AI unicorn), ex Series C fintech (PCI-DSS).
Arun Baby

Co-founder & CTO
Agentic AI · ex Samsung Galaxy AI (speech models on 200M+ devices), ex Cisco. IIT Madras · 2 speech-AI patents · 20 research publications.
Sanchali Sharma

Co-founder & Enterprise PM
Exited Voice AI founder (talkingly.ai). PM ex-Microsoft, ex-Meta. IIM Bangalore. Drove $40M incremental revenue at NexgAI.
Tessa Hutchman

Co-founder & Chief Corporate Affairs
University of Cambridge (MEd Maths — AI in Education) · 1st class honours, top 5%. Ex Nurturious policy lead; Emma Enterprise finalist.
Get started

Your AI agents are already being attacked — through natural language and through classic software vulnerabilities.

Run your first validation in 30 minutes. No integration required, no data leaves your stack.