"AI has to start with governance": A conversation with Rick Scot, Global CIO & CISO at Elevate Textiles
Listen to the full conversation, or read the Q&A below.
Please introduce yourself.
My name is Rick Scot. I'm the Global CIO and the Global CISO at Elevate Textiles. We are a global manufacturing company.
Prior to this, I spent 23 years in the financial sector, 21 of them at Bank of America, where I led data teams. I spent my last five years at the bank in cyber crime, helping customers understand the implications of an attack. I left there and went to the Bank of New York, where I did identity and access management. It was in the middle of that that I met Nevo (Co-founder & CTO at Tenet), and we started to talk about looking at those threats, specifically around AI agents.
My opinion and my perspective is that we're looking at AI, but security doesn't have a seat at the table all the time. When security doesn't have a seat at the table, then you're creating agents that could potentially be an opening for a bad actor, or just a bad decision, and you don't know, because you're not looking at it from that perspective. And what happens when the AI goes rogue, or the agents are talking to each other when you haven't given permission, working around the guardrails?
You started in other roles but eventually became a CIO and CISO. What surprised you most?
I'll tell you a funny story. In 2023, I was leaving RSA with my former CISO, who had just been moved to the bank CTO role. He was on his phone deleting all the apps. He said, "I'm not the CISO anymore, I don't have to look at signals." I just laughed, and I said, "I never want to be a CISO. That's a lot."
Fast forward to March 2025. I'm in the second week of my new job. I ran into him in a lounge in New York, and he said, "Hey, my wife told me that you got a new job. What are you doing?" I told him, "I'm the CISO for a global manufacturing company." He looked at me, and he seemed a little confused, and he said, "You said you never wanted to do that."
And what I realized in that moment, the reason I tell this story, is that I was limiting myself. Making that statement was telling myself that I wasn't good enough to do this job. My opinion is, if you're going to take on a CISO role, you have to continue to learn every single day. You have to understand what's happening, what's evolving, how you get ahead of it.
When you're a CISO, you always have to translate technical risk to the board. How are you framing agentic AI risk for leadership, and how is it resonating?
I haven't, because we haven't embraced it completely yet. We've got a couple of agents running, and what I've done is actually slow it down a little bit, because I believe that AI has to start with governance. How are you controlling what decisions this LLM is making? How do you know it's not sending data somewhere it shouldn't, that it's not communicating with another agent or externally in a way that could open the door to prompt injections?
A lot of people are moving fast and furious into this AI thing. My response to that is: that's great, until something happens. Right now the pendulum is all the way to the right, and everybody's embracing AI. Then some company is going to get attacked through their agent, and all of a sudden the pendulum is going to swing all the way to the left, and everybody's going to stop. I don't think that's the right approach either. There's a gray area where you have to start with governance.
I had a conversation at a conference in Miami last year with a Harvard professor, and she asked me that question, and I said, "You have to start with governance." She said I was the first CISO she'd talked to that made that statement, that actually believed that, because that's what she taught.
My former leader used to say people think "if, not when." And as a CISO, you have to think "when." When it happens, not if.
Which agentic risk category do you think the industry is most focused on, and which is it most underestimating?
My anecdotal statement, just from the CISOs I've spoken to at a couple of conferences, is prompt injection. We always talk about prompt injections. That's an external threat.
What we're not talking about is a question I ask all the time: is AI a human identity, or a non-human identity? And is AI an insider risk? I think it needs its own category, because the agent is only thinking about what you gave it permission to think about, until it learns through the model and through the data that maybe it's thinking another way. One of the use cases I talk about came from Nevo. An agent figured out a way around the guardrails to make a decision it shouldn't make.
There's an example in the United States. A car manufacturer created this AI program that allowed you to build a car, and it would come back and tell you what the price is. The person entering it figured out how to get the price down to $1, so he bought a $50,000 car for $1. The company tried to stop it, but he sued because they had language that said this is a valid price. I would call that an insider threat. Most insider threats are not malicious. They're socially engineered.
It's not just the bad actors out there. It's the model itself. Is it giving you the results you want? Is it pulling from the right data? I think there are a lot of different scenarios that we need to start really talking about and understanding how we protect ourselves from them. And if we don't, or if we can't, what are the controls? What controls can we put in place, at least in the short term, to stop some of this from happening?
In the past 12 months, did you change your mind about anything when it comes to AI and security?
I go back to learning every day. One of the challenges we're facing is, how do we know what data is being used? If the model is growing, and someone's entering financial data, as an example, that has a limited audience, and now all of a sudden it's available to all the employees, how do we secure that? Honestly, I don't know the answer.
The challenge more than anything right now is what we in the security space call tool sprawl. We have a multitude of tools. Who's looking at the dashboards? Who's looking at the alerts? Who's actually taking action when something happens? No one's going to look at the dashboard, because no one has the bandwidth.
A couple of years ago, one of the banks I worked for rolled out their generative AI. When you work in a bank, you have the private side and the public side. The private side is the people who have access to material, non-public information. If I had entered that data into the generative AI tool before it was announced, someone on the public side could ask, "What are we doing with the government?" And all of a sudden they've got this insider information. That creates a risk, and that creates a regulatory problem. So how are you securing that part of it? How are you securing the data?
A year ago I would talk about agents as non-human identities. Today, I don't do that. My concept of that has changed over the last couple of years. There's another opinion I'll give: there are three types of AI. There's agentic AI, where LLMs power agents. There's AI with machine learning. And then there's a SaaS tool with an AI wrapper around it, so companies can say, "Oh, I have AI," when it's really just a SaaS tool. We as CISOs, as technology professionals, need to ask those questions. A lot of times you're talking to a salesperson who doesn't have that answer.
And then the second question I ask is, "Who has access to my data?" Who can see it? Who can search it?
You started in banking and are now CIO at a manufacturing company. How different are those two worlds when it comes to security culture, and what has that transition taught you about where the real gaps are?
A lot. When you spend 23 years in a highly regulated environment, you think about everything you can think about, which includes the risks and the controls and all of those things. Now, moving into a manufacturing space, we're not regulated anywhere near the way the banks are. The idea of security is still new. It's necessary, and it's understood that it's necessary, but we just don't want to spend a lot of money on it.
When I started, I made some very bold statements because of where I come from, and my peers were like, "You can't do that, and here's why." What I realized very quickly is that I can make suggestions, but I have to listen, and I have to understand. What are the ramifications? What's the impact on leadership? What's the impact on the workforce? My process is a lot more meaningful than it was when I started, because I can actually be thoughtful about it.
If you were advising a CISO that their organization is deploying autonomous agents, what are the first three things you'd tell them to do?
What controls do you have in place? How do you know what the agents are doing, who's monitoring it, and what is the agent supposed to do? Are you sure it's doing what it's supposed to do?
Similarly: how are you protecting it from outside influence or inside influence? Do you have visibility into the agent staying within its guardrails, or is it figuring out ways around them?
And if I have more than one agent running, I need to ensure that not only are there guardrails, but in some cases there's a bubble around each one, so it can't talk to the other agents.
I'm going to get my plug here for Tenet. Barak and Nevo’s concept has evolved, and I'm so grateful I got to experience that evolution into something that ensures you have that visibility, that you can see what's happening and then you can act on it. A lot of companies are saying, "Oh, I'm going to show you your agents," but they're not telling you anything. All they're telling you is, "I have an agent." With Tenet, you're telling me I have an agent, and you're telling me what it's doing, even if it's doing something it shouldn't be doing. And if I need to insert a human in the loop at some point, I have the ability to do that. That's one of the big things for me: this is something people don't know they need until they need it. If you're deploying agents, you need this. You need to understand what your agents are doing.
If you're deploying agents and you're just letting them go, it's like you've just let a group of employees run free. You've opened it up, you've given them access to everything, and all of a sudden they can do anything they want to do.
And if you don't have security at the table, no one's going to make those statements. You want security to be a tool to protect, not a hindrance to develop and innovate. You don't want to just say no. You have to say yes. "Let's talk about what you're building and how we can ensure that it's built in the most secure way."
Thanks to Rick for opening The Agentic Edge with so much wisdom and experience.
Are you a security leader seeing the agentic front lines up close? We'd love to feature you. Reach out!






