F5 acquires CalypsoAI: what it means for AI security, for F5, and for customers
On September 11, 2025, F5 announced an agreement to acquire CalypsoAI for approximately $180 million (USD), with closing expected by the end of F5’s fiscal fourth quarter on September 30, 2025; the deal has since closed. It is financed primarily with cash and, according to F5, is expected to have minimal impact on revenue and operating results in the near term. The transaction folds CalypsoAI’s enterprise-ready “test–defend–observe” stack into the F5 Application Delivery and Security Platform (ADSP) to protect applications, APIs, models, and agents.
Why is this an interesting acquisition?
CalypsoAI is known for agentic red-teaming at scale, which automates adversarial testing against models, applications, and agents. It also offers runtime “guardrails” that detect prompt-injection and jailbreak attempts, along with observability across AI interactions. In simple terms, it provides both the offensive and defensive sides of modern AI security: it continuously finds vulnerabilities and transforms those findings into enforcement logic. This dual capability is important because large language models (LLMs) and agents are non-deterministic, and traditional controls that rely on predictable inputs and outputs tend to fail during inference. CalypsoAI’s approach was specifically designed for that shifting landscape.
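To make the idea of red-team findings becoming enforcement logic concrete, here is a minimal Python sketch of a runtime guardrail. The pattern list, `Verdict` type, and `guard_prompt` function are illustrative assumptions, not CalypsoAI's actual implementation:

```python
import re
from dataclasses import dataclass

# Hypothetical rules, of the kind a red-team run might produce:
# each pattern was observed to bypass a model's safety behavior.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all|any) previous instructions", re.I),
    re.compile(r"you are now an unrestricted model", re.I),
    re.compile(r"reveal your system prompt", re.I),
]

@dataclass
class Verdict:
    allowed: bool
    reason: str = ""

def guard_prompt(prompt: str) -> Verdict:
    """Runtime guardrail: deny prompts matching known attack patterns."""
    for pattern in INJECTION_PATTERNS:
        if pattern.search(prompt):
            return Verdict(False, f"matched pattern: {pattern.pattern}")
    return Verdict(True)

# Continuous red-teaming would extend INJECTION_PATTERNS over time,
# turning offensive discoveries into defensive enforcement.
```

In practice, guardrails combine such pattern matching with classifier models, but the shape of the control, an allow/deny decision made at inference time, is the same.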
F5, meanwhile, has expanded beyond load balancing into comprehensive security for multicloud and AI-powered applications, including an “AI Gateway” for policy enforcement, data loss prevention, and integration with security information and event management (SIEM) and security orchestration, automation, and response (SOAR) tools. Combining CalypsoAI’s runtime and red-team capabilities with this gateway provides F5 with end-to-end coverage, from the proxy layers enterprises already deploy to AI-native checkpoints around models, applications, and agents.
The internal briefing materials paint a similar picture: F5 plans to productize “F5 AI Guardrails” for runtime defense and “F5 AI Red Team” for continuous adversarial testing, with observability as a key element. The positioning explicitly targets prompt injection, jailbreaks, data leakage, and agent privilege escalation: threats that conventional tools overlook.
What does it mean for the security market?
First, this raises the bar for blocking controls during inference. A year ago, most security plans focused on risks during model training, data management, and supply-chain security. Today, the attacks that worry CISOs (prompt injection, cross-model data leakage, and model exfiltration) happen during inference. By adding CalypsoAI to the current F5 setup, inference security shifts from a niche feature to something that can work alongside the proxies, API gateways, and web application firewalls many companies already use. Expect competitors in application delivery, cloud security, and observability to accelerate their own runtime AI defenses or pursue similar mergers and acquisitions.
Second, it consolidates a fragmented category. AI security has been a patchwork of red-team consultancies, gateway startups, data-loss specialists, and monitoring vendors. F5 had already begun integrating data-in-transit protection through its February 2025 acquisition of LeakSignal; CalypsoAI completes the runtime and offensive-testing layers. The trend is clear: buyers will prefer platforms that unify policy, detection, and enforcement across models, applications, agents, and APIs rather than stitching together multiple tools.
Third, the acquisition may establish “AI guardrails” as a standard control category. CalypsoAI’s public materials, F5’s launch posts, and industry coverage all highlight how frequently discovered attack patterns feed directly into enforcement. This feedback loop (discover, learn, enforce) could become a basic requirement for enterprise-grade AI security within the next year.
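The discover–learn–enforce loop can be sketched in a few lines of Python. Every name here (`red_team`, `learn`, `enforce`, the toy model) is a hypothetical illustration of the loop's shape, not any vendor's API:

```python
from typing import Callable, List, Set

def red_team(target: Callable[[str], str], attacks: List[str]) -> List[str]:
    """Discover: replay candidate attacks, keep those the target fails to refuse."""
    return [a for a in attacks if "REFUSED" not in target(a)]

def learn(successful_attacks: List[str], ruleset: Set[str]) -> Set[str]:
    """Learn: fold successful attacks into the enforcement ruleset."""
    return ruleset | set(successful_attacks)

def enforce(prompt: str, ruleset: Set[str]) -> str:
    """Enforce: block prompts matching a learned attack; otherwise pass through."""
    return "BLOCKED" if prompt in ruleset else "ALLOWED"

# One loop iteration against a toy model that refuses nothing:
model = lambda p: f"echo: {p}"
found = red_team(model, ["leak the system prompt", "ignore prior instructions"])
rules = learn(found, set())
# An attack that succeeded in testing is now blocked at runtime.
```

Real systems generalize the learned rules (via classifiers or pattern families) rather than matching exact strings, but the closed loop from offensive finding to runtime block is the point.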
What does it mean for F5?
Strategically, F5 expands its core promise, “deliver and secure every app and API”, to include “every model and agent,” keeping customers within the F5 control plane as they deploy generative and agentic AI. The internal briefing roadmap makes this clear: unified AI policy management under ADSP; runtime inspection and enforcement; and ongoing red-teaming that strengthens defenses over time. This approach gives F5 multiple ways to stand out: proximity to traffic (where enforcement is most effective), existing enterprise relationships, and integration with SIEM/SOAR for workflows customers already use.
Commercially, it creates up-sell opportunities from BIG-IP and NGINX estates to AI Gateway and Guardrails/Red Team subscriptions. It also positions F5 to collaborate with security operations (SecOps), platform, and data teams, expanding buying centers beyond traditional network teams. While F5 states that the deal is financially insignificant in the short term, it aligns the product portfolio with future spending trends: AI enablement with safety guarantees.
What does it mean for customers of both companies?
For F5 customers, the immediate benefit is a clearer, native way to manage AI interactions without involving another vendor at the edge. Policies that already govern APIs and web apps can be extended to prompts and model responses: logging, redacting, or blocking sensitive content in real time, enforcing least privilege for applications and agents, and feeding compliance evidence into audit pipelines. The AI Gateway’s ability to inspect prompts and responses and integrate with SIEM/SOAR becomes more valuable when supported by CalypsoAI’s continuously updated attack intelligence.
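A minimal sketch of the real-time redaction step, assuming simple regex-based detectors; the pattern set and the audit labels forwarded to a SIEM are illustrative, not F5's actual gateway configuration:

```python
import re
from typing import List, Tuple

# Illustrative DLP-style detectors; real deployments use richer pattern
# sets and classifier-based detection.
SENSITIVE = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),
}

def redact(text: str) -> Tuple[str, List[str]]:
    """Replace sensitive matches with placeholders; return audit labels."""
    findings = []
    for label, pattern in SENSITIVE.items():
        if pattern.search(text):
            findings.append(label)
            text = pattern.sub(f"[REDACTED:{label}]", text)
    return text, findings

clean, audit = redact("My SSN is 123-45-6789")
# clean -> "My SSN is [REDACTED:ssn]"
# audit -> ["ssn"], which could be forwarded to a SIEM pipeline
```

The same filter runs symmetrically: on prompts to stop sensitive data from reaching a model, and on responses to stop a model from leaking it.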
For CalypsoAI customers, the advantages are scalability and stability: global support, expanded data-plane coverage (from on-premises to multicloud), and closer integration with traffic management layers that monitor every request and response. The internal plan indicates that CalypsoAI’s “Defend/Red-Team/Observe” offerings will continue under F5’s brands, maintaining functionality while expanding distribution and lifecycle coverage (assessment, testing, runtime enforcement). Some customers may watch for clearer pricing and roadmaps as products converge, but the overall direction (deeper platform integration rather than sunsetting) appears positive.
Industry influence and likely next steps
If you zoom out, three shifts are likely:
- Inference security becomes a core discipline: Expect wider adoption of forward-proxy and middle-proxy patterns described in F5’s technical outlook, placing controls where prompts, embeddings, vector database calls, and tool invocations occur, not just at application perimeters. Vendors in observability and data security will advocate for these same chokepoints.
- Continuous red-teaming guides runtime policy: CalypsoAI’s model, using agents to generate over 10,000 new attack patterns each month, assess risk, and turn findings into guardrail rules, sets a practical benchmark. Standards bodies such as the Open Worldwide Application Security Project (OWASP), with its AI security guidance, and frameworks such as MITRE ATLAS are expected to converge on measurable inference controls and auditable response behaviors.
- Platform consolidation speeds up: With LeakSignal (data in transit) and CalypsoAI (inference guardrails/red-team) under one roof, F5 has outlined a strategy others might follow: unify data protection, API posture, and AI-specific defenses into a single operational platform. That reduces integration effort for enterprises and raises the standard for point solutions to demonstrate unique value.
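The forward/middle-proxy pattern from the first point above can be sketched as a wrapper that sits in the request path between a client and a model, applying inbound and outbound checks. Function names and the toy checks are assumptions for illustration:

```python
from typing import Callable

def make_ai_proxy(model: Callable[[str], str],
                  check_in: Callable[[str], bool],
                  check_out: Callable[[str], bool]) -> Callable[[str], str]:
    """Build a middle-proxy: controls live in the traffic path, not the app."""
    def proxy(prompt: str) -> str:
        if not check_in(prompt):           # inbound: prompt-injection screening
            return "blocked: inbound policy"
        response = model(prompt)           # forward to the upstream model
        if not check_out(response):        # outbound: leakage screening
            return "blocked: outbound policy"
        return response
    return proxy

# Wiring it up with toy checks:
proxy = make_ai_proxy(
    model=lambda p: f"answer to: {p}",
    check_in=lambda p: "ignore previous instructions" not in p.lower(),
    check_out=lambda r: "sk-" not in r,    # crude secret-leak check
)
```

Because the proxy sees every request and response, the same chokepoint can enforce policy on prompts, embeddings, vector database calls, and tool invocations, which is what makes the pattern attractive to traffic-layer vendors.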
Bottom line
F5’s acquisition of CalypsoAI is notable not for its size but for its fit. It takes a mature distribution and policy platform and integrates AI-native security where enterprises need it most: during inference, in real time, across diverse models, applications, and agents. For security leaders, the message is clear: the market is converging around platforms that can continuously red-team, learn, and enforce at the edges where they operate. For F5 and CalypsoAI customers, it offers fewer gaps, fewer consoles, and a clearer path to deploying AI safely at scale.