Tenet Threat Labs recently captured three live attacks targeting enterprise AI agents — prompt injection, chain-of-thought (CoT) goal manipulation, and MCP-layer exploitation. None were flagged by conventional tools. Active exploitation isn't coming; it's already here.
"Agents are doing exactly what they're allowed to do... just in a way they shouldn't." That line captures the core challenge of securing AI agents at runtime, and it's where the conversation with Barak Sternberg begins.
Tenet Security has been named a Hot Company in GenAI Agentic Application Security at the InfoSec Awards by Cyber Defense Magazine, recognizing our platform's approach to securing autonomous AI agents at runtime — a gap traditional security tools are not built to address.
Tenet Security has won two Gold Awards at the Cybersecurity Excellence Awards, in the Most Innovative Cybersecurity Company and Agentic AI Security categories, recognizing our work building the industry's first Runtime Defense Platform for autonomous AI agents.
Tenet Research discloses a critical S3/CDN flaw in Hugging Face that exposed private LLM weights via a "header-override" exploit. This case proves that static configs aren't enough — runtime defense is the only way to secure the agentic baseline. Full teardown here:
Enterprise AI has moved from simple chat to autonomous agents that query databases and execute code. This power brings a new threat: Agentjacking. These multi-stage attacks subvert an agent's reasoning, turning its autonomy into a weapon against your own infrastructure.