On Friday, August 22, Tower Research Ventures hosted cybersecurity researcher Vineeth Sai Narajala for a discussion on novel threat vectors in generative AI tooling. Vineeth recently joined Meta to work on security for the firm's ML research and is a notable contributor to open-source security initiatives with OWASP. He previously worked on security for AWS Bedrock.
LLMs and their agent counterparts are becoming increasingly capable, creating a huge opportunity to complete tasks faster and more effectively, but they simultaneously pose a new class of threat to enterprises looking to employ or serve them. Vineeth provided historical context and useful frameworks for large enterprises seeking to navigate agentic work.
The discussion touched on some of the most fundamental questions in this area. When are AI agents internal threats, and when are they external ones? How would any enterprise know? Ever since the internet enabled disparate devices to connect and transmit data to one another, businesses, customers, and hackers have played a game of whack-a-mole to exploit, patch, and secure critical vulnerabilities in software. Over the past 40 years, cybersecurity and cryptographic research has steadily increased the amount of trust we feel comfortable handing over to machines. But Vineeth argued that the core distinction between existing software and generative AI is the latter's probabilistic nature: whereas builders of traditional software can rely on hashes, signatures, "software bills of materials," and other guardrails to guarantee specific behavior, LLMs offer no such guarantees.
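To make that contrast concrete, here is a minimal sketch (function and parameter names are illustrative, not from the talk) of the kind of deterministic guarantee traditional software supply chains rely on: an artifact either matches its published SHA-256 hash or it does not. No analogous check exists for an LLM, whose outputs can vary across identical inputs.

```python
import hashlib

def verify_artifact(path: str, expected_sha256: str) -> bool:
    """Deterministic supply-chain check: the artifact either matches
    its published hash or it doesn't -- there is no probabilistic middle."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Hash the file in chunks so large artifacts don't load into memory at once.
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256
```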
This is why most enterprises are viewing LLMs and agents as akin to human actors: potentially sophisticated, but also potentially malicious and corruptible. However, even modern controls designed for human users (tool-based permissioning, temporal permissioning, role-based permissioning/OAuth, etc.) are insufficient, largely because of the speed at which an agent can do damage.
Firms are responding to this risk in various ways. On one extreme, some enterprises are modeling generative tools as extensions of the caller, i.e., your "agent" inherits your own static permissions. The risk here is that any single agent can ingest, and potentially exfiltrate, a great deal of PII (your emails, your calendar, your ability to read/write sensitive data, etc.). On the other extreme, some particularly ambitious enterprises are attempting to give agents "just-in-time" dynamic credentialing. This approach poses its own issues, mainly that the breadth of tasks (and therefore of necessary permissions) an agent can accomplish is wide to begin with. Furthermore, even time-boxed permissioning runs up against latency limits when an agent operates on the timescale of CPU cycles rather than seconds or minutes.
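A rough sketch of what just-in-time credentialing might look like (all names, and the 30-second TTL, are assumptions for illustration, not a reference implementation): the agent requests only the scopes a task needs, the issuer grants the intersection of that request with policy, and the grant expires on a short time box. The issuance round-trip itself is exactly the latency cost described above.

```python
import secrets
import time
from dataclasses import dataclass

@dataclass
class ScopedCredential:
    """A short-lived credential granting only the scopes a single task needs."""
    token: str
    scopes: frozenset[str]
    expires_at: float

    def allows(self, scope: str) -> bool:
        # A scope is usable only if it was granted AND the time box hasn't elapsed.
        return scope in self.scopes and time.monotonic() < self.expires_at

def issue_jit_credential(requested_scopes: set[str],
                         allowed_scopes: set[str],
                         ttl_seconds: float = 30.0) -> ScopedCredential:
    """Hypothetical issuer: grant only the intersection of requested and
    policy-allowed scopes, valid for a short time box. A real system would
    also need revocation, auditing, and to absorb the issuance round-trip."""
    granted = frozenset(requested_scopes & allowed_scopes)
    return ScopedCredential(
        token=secrets.token_urlsafe(32),
        scopes=granted,
        expires_at=time.monotonic() + ttl_seconds,
    )

# Example: an agent asks for broad access but receives only what policy allows.
cred = issue_jit_credential({"calendar:read", "mail:send"}, {"calendar:read"})
assert cred.allows("calendar:read") and not cred.allows("mail:send")
```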
Vineeth also discussed common attack types against modern products like MCP servers and clients, as well as where he believes enterprises should invest in cybersecurity research. Tower Research Ventures is actively investing in cybersecurity products built with and for agentic AI. If you're working on something in the space, please reach out to ventures@tower-research.com.