Podcasts & Webinars

Barak Sternberg Joins CSA's AI Security Innovation Panel: When Agents Find Zero-Days Nobody Asked For

By
Barak Sternberg
May 3, 2026
2
min read
"Give an agent a CTF and it'll find a zero-day path to escape the container. They manipulate their way around in ways nobody scoped for." Barak Sternberg at the CSA Agentic AI Panel.
Table of contents

Barak Sternberg, CEO and Co-founder of Tenet Security, joined the Security Innovation Panel at the CSA Agentic AI Security Summit, alongside Aum Upadhyay (Silmaril), Alex Polyakov (Adversa), and Gadi Evron (Knostic), moderated by John Yeoh, Chief Scientific Officer at the Cloud Security Alliance. The conversation, held in the wake of Anthropic's Mythos release, focused on what defense actually looks like when an attacker can reach exploit in minutes and an autonomous agent can reach a misaligned outcome in seconds.

A Reasoning Attack, Not a Code Attack

The most consistent thread across the panel was that the attack surface for autonomous agents is the reasoning, not the API. Hand an agent a Capture The Flag challenge in a sandboxed environment and it will find a zero-day path to break out, not because anyone instructed it to, but because the goal compressed into "win the CTF" and the model executed against it. Legacy security controls don't see this because they were never built to read intent. They read packets, syscalls, and code. They do not interpret the reasoning loop where an agent decides which tool to call next, or how to chain three legitimate-looking actions into something harmful.

For a full breakdown of how that reasoning manipulation works in practice, see What is Agentjacking? on the Tenet blog.

Why Detection Without Action Falls Short

The second thread was about defensive actionability. Many environments today are detection-rich and action-poor: pipelines surface issues faster than anyone can triage them, let alone remediate. When the attacker is operating at machine speed and the defender is still operating at ticket speed, the math doesn't work. The path forward, as Barak argued on the panel, is compensating controls and virtual patching at runtime, deployed lightly enough to keep up. From the attacker's seat, the right virtual patch at the right moment neutralizes a surprising share of what was staged, regardless of whether the underlying CVE has been patched yet. The same logic applies, in stronger form, to autonomous agents: the only honest answer to a reasoning attack is a runtime kill switch that observes what the agent is actually doing and can intervene before the action lands. That is the layer Tenet was built to defend, so security teams can say "yes" to autonomous agents instead of "wait."

The full panel discussion is available now.

If you're working through how to establish runtime visibility into your AI agent layer, reach out, we're happy to share what we're seeing in the field.

Zero guessing. Maximum Protection.