AI or Die: Where Human Authority Must Ultimately Sit

John White is the Field CISO for EMEA at Torq. A respected security executive with more than 20 years of leadership experience, John previously served as CISO at Virgin Atlantic, where he led a multi-year transformation deploying the Torq AI SOC Platform to modernize cyber operations. Prior to that, he built and transformed security functions for global organizations, including ASOS, Liberty Global, AEG Europe, and KPMG.

There’s a growing acceptance that AI is no longer optional in security. That battle is largely won. The more interesting question — and the one I keep getting asked — is what we actually believe AI should be responsible for, and where human authority must ultimately sit.

It’s a governance question. And right now, most organizations are getting it wrong.

Not because they’re being reckless. But because they’re thinking about AI governance the same way they thought about governing ChatGPT usage: as a risk to be managed rather than a capability to be designed. 

That’s the wrong frame entirely. 

Especially as technologies like the Model Context Protocol (MCP) — the open standard through which AI models connect to external tools, data sources, and other systems — start to reshape the landscape in ways most governance frameworks aren’t remotely equipped to handle.

So let me share how I think about this. Where AI can and should own the work. Where humans must stay in the loop. And what a governance model that’s actually fit for purpose looks like in 2026.

The Accountability Gap: Extreme Ownership Starts With the CISO 

Let me start with the question I get asked more than any other: If AI makes the wrong call and a breach happens, who’s accountable?

The answer is straightforward, even if it’s uncomfortable: the CISO.

It’s no different from recruiting a senior analyst you believed in who then makes a catastrophic mistake. The analyst may be at fault — but your head is on the block.

AI is the same. The CISO’s responsibility is to validate the technology, validate the approach, test the effectiveness, test the outcomes, and exercise that judgment in a safe environment before letting it anywhere near the enterprise. Then go through every step to de-risk it as much as possible. That accountability doesn’t transfer to the vendor. It doesn’t transfer to the board. It sits with you.

It’s a mindset Navy SEALs Jocko Willink and Leif Babin captured perfectly with the concept of Extreme Ownership — the idea that leaders must take full responsibility for everything in their world, including failure, with no excuses and no ego. 

It’s one of the core values at Torq, and honestly, it’s a big part of why the culture resonated with me when I joined. Because this is exactly how I’ve always approached security leadership. You don’t get to point at the AI. You don’t get to point at the vendor. You own it.

And once you accept that, the whole question of where to draw the governance line becomes a lot clearer.

What AI Should Own, What It Should Inform, and What Stays Human 

I think about this in terms of the three-layer model I outlined in the first piece in this series: Outcome, Judgment, and Execution. 

In that model, the execution layer is where AI and automation operate — continuously, consistently, at machine speed, within predefined guardrails. This is where AI earns its keep in the AI SOC: Repeatable, rules-based, high-volume work. Tier 1 triage. Alert enrichment. Containment actions that are reversible, well-understood, and within clearly defined boundaries.
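
To make the guardrail idea concrete, here is a minimal sketch (illustrative only, not Torq’s API) of how a containment action might be gated so it executes at machine speed only when it is reversible, bounded, and pre-approved. The action names, thresholds, and asset list are assumptions for the example:

    from dataclasses import dataclass

    @dataclass
    class ContainmentAction:
        name: str            # e.g. "isolate_host"
        target: str          # asset the action applies to
        reversible: bool     # can we undo it cleanly?
        blast_radius: int    # rough count of affected users/systems

    # Predefined guardrails, agreed before the workflow ever runs.
    ALLOWED_ACTIONS = {"isolate_host", "disable_token", "block_ip"}
    MAX_BLAST_RADIUS = 5                        # anything wider escalates
    CRITICAL_ASSETS = {"dc-01", "payments-gw"}  # never auto-contained

    def within_guardrails(action: ContainmentAction) -> bool:
        """Return True only if the action may execute at machine speed."""
        return (
            action.name in ALLOWED_ACTIONS
            and action.reversible
            and action.blast_radius <= MAX_BLAST_RADIUS
            and action.target not in CRITICAL_ASSETS
        )

    action = ContainmentAction("isolate_host", "laptop-4821", True, 1)
    if within_guardrails(action):
        print(f"auto-executing {action.name} on {action.target}")
    else:
        print(f"escalating {action.name} on {action.target} to an analyst")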

The judgment layer is where humans must stay in the loop. This is where I draw the line — and it’s not an arbitrary one. The decisions that require human authority are the ones that demand business context. Risk appetite. The political environment you’re operating in. The company’s financial situation. The strategic direction the board is pursuing this quarter.

No matter how well-trained an AI agent is, no matter how much historical incident data it can pull from, it will never have its finger on the pulse of all of that. You could add it to the knowledge base — but full contextual judgment isn’t something you can upload. That’s where humans must sit.

The outcome layer is where the strategic intent lives. This is entirely human. What are we trying to protect? What does success look like? How do we measure it? AI can inform this layer — surface patterns, highlight gaps, accelerate analysis — but it cannot define it.

The more capable AI becomes, the more important it is to be precise about where human authority is non-negotiable.
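
As a rough illustration of that precision, the three-layer split can be expressed as a simple routing rule. This sketch is a deliberate simplification, not a real implementation, and the task flags are hypothetical:

    from enum import Enum

    class Layer(Enum):
        EXECUTION = "AI owns it, inside guardrails"
        JUDGMENT = "AI informs it, a human decides"
        OUTCOME = "entirely human: intent and success criteria"

    def route(task: dict) -> Layer:
        # Strategic intent never leaves the outcome layer.
        if task.get("defines_strategic_intent"):
            return Layer.OUTCOME
        # Business context (risk appetite, politics, finances) means a human decides.
        if task.get("requires_business_context"):
            return Layer.JUDGMENT
        # Repeatable, rules-based, high-volume work runs at machine speed.
        return Layer.EXECUTION

    print(route({"name": "tier1_triage"}).name)                    # EXECUTION
    print(route({"name": "notify_regulator",
                 "requires_business_context": True}).name)         # JUDGMENT

The default is the point: anything carrying business context routes to a human, and only repeatable, rules-based work falls through to machine speed.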

AI Trust Isn’t Given. It’s Earned. 

One of the most common mistakes I see is organizations trying to go too fast, too soon. They see the potential, they’re under pressure to deliver results, and they push AI into complex, high-stakes decisions before they’ve built the foundation of trust that those decisions require.

Here’s how I think about the right sequence for building trust with AI: least critical to most critical, least complex to most complex.

Start with lower-level, repeatable tasks. Build workflows. Run them. Review the outcomes. Ask the honest question: did the workflow you just built actually achieve the outcome you wanted? If yes, take the learning and move further along the stack. If not, go back through the process, improve it, and run it again.

It’s a continuous improvement loop — the objective is to build trust incrementally as you go. And it’s the only approach that’s actually sustainable.
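
One hedged way to picture that loop in code: a workflow starts in recommend-only mode and earns auto-execution only after a reviewed track record, dropping back the moment an outcome misses. The review window and threshold here are illustrative, not prescriptive:

    REVIEW_WINDOW = 50   # outcomes reviewed before considering promotion
    PROMOTE_AT = 0.98    # reviewed success rate needed to expand autonomy

    class WorkflowTrust:
        def __init__(self, name: str):
            self.name = name
            self.mode = "recommend_only"    # start least critical, least complex
            self.outcomes: list[bool] = []  # True = achieved the intended outcome

        def record(self, achieved_outcome: bool) -> None:
            self.outcomes.append(achieved_outcome)
            if not achieved_outcome:
                # Something didn't go as planned: recalibrate, don't give up.
                self.mode = "recommend_only"
                self.outcomes.clear()
            elif self.mode == "recommend_only" and len(self.outcomes) >= REVIEW_WINDOW:
                if sum(self.outcomes) / len(self.outcomes) >= PROMOTE_AT:
                    self.mode = "auto_execute"  # trust earned; expand the boundary

    wf = WorkflowTrust("tier1_phishing_triage")
    for _ in range(REVIEW_WINDOW):
        wf.record(True)
    print(wf.mode)  # auto_execute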

Think about how trust works — with a new colleague, a new friend, a new direct report. It’s never given. It’s earned through consistent actions that match intent. You start small, observe, and expand as the track record develops. And when something doesn’t go as planned, you use it to recalibrate, not give up.

Building trust with AI is no different. The actions the system takes are a direct reflection of the foundations and boundaries you built: the workflows you designed, the guardrails you set, the outcomes you defined. If it’s producing the right results, that’s your foundation holding. If it isn’t, that’s the feedback loop telling you to go back and rebuild before you go further.

You Can See Automation. You Have to Trust AI.

Apprehension around AI in SecOps runs significantly higher than apprehension around traditional security automation, and for good reason. With automation, the input-output relationship is transparent. With AI — particularly agentic AI — the system is making a learned judgment about what should happen next. That’s a fundamentally different kind of relationship to build.

To get comfortable with AI, CISOs need to go back to the basic building blocks. Understand how decisions are being made. Understand what guardrails are in place. Understand what the boundaries are. And then expand them deliberately, as the evidence builds. Just like you would with anyone new you’re learning to trust.

What Governance Actually Needs to Cover

Most governance models being applied to AI right now were designed to manage GenAI usage — the “who’s using ChatGPT” era of governance. They’re not built for governing AI within security tooling itself. And they’re certainly not built for what’s coming next with MCP, where AI models call tools, data sources, and other agents in ways that create entirely new chains of decision-making and action.
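
To illustrate one gap those frameworks will need to close, consider chain traceability: if governance is to cover model-to-tool chains at all, every hop needs provenance so the full decision chain can be reconstructed after the fact. This is a hypothetical logging pattern, not part of the MCP specification:

    import json, time, uuid

    def log_hop(chain_id: str, actor: str, action: str, parent: str | None) -> str:
        """Record one hop in an agent decision chain."""
        hop_id = str(uuid.uuid4())
        print(json.dumps({
            "chain_id": chain_id,  # ties every hop to one originating alert
            "hop_id": hop_id,
            "parent_hop": parent,  # which step triggered this one
            "actor": actor,        # model, tool, or human
            "action": action,
            "ts": time.time(),
        }))                        # stand-in for a real audit sink
        return hop_id

    chain = str(uuid.uuid4())
    h1 = log_hop(chain, "triage_agent", "enrich_alert", parent=None)
    h2 = log_hop(chain, "enrichment_tool", "lookup_ip_reputation", parent=h1)
    log_hop(chain, "triage_agent", "recommend_containment", parent=h2)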

When I think about a governance model that’s actually fit for purpose, I see three dimensions:

  1. The people dimension treats AI as you would a new employee. What decisions is it authorized to make? What requires escalation? What is it never permitted to do? These aren’t technical questions. They’re policy questions, and they need to be answered at the organizational level — not by the security team in isolation.
  2. The legal dimension covers data processing, how AI interacts with sensitive information throughout the company, and how its usage is documented for regulatory purposes. This isn’t just a security problem. Legal needs a seat at this table.
  3. The technology dimension covers what technology you’re using, how you’re using it, and the integrity of the system. This is where the security and technical teams lead — validating the platform, the architecture, the integrations, and the guardrails.

None of these dimensions operate in isolation. The day-to-day governance can sit with the security and GRC teams. But the policy has to be organizational. It has to be holistic. Enforcing it comes down to the technical teams, but owning it requires the whole organization to be aligned.
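
As a hedged illustration of how the people-dimension policy questions can be handed to the technical teams for enforcement, here is what that policy might look like expressed as data. The action names and roles are invented for the example:

    AI_AUTHORITY_POLICY = {
        # What is the AI authorized to decide on its own?
        "authorized": {"close_benign_alert", "enrich_alert", "isolate_workstation"},
        # What must escalate to a named human role first?
        "requires_escalation": {
            "disable_executive_account": "soc_manager",
            "notify_regulator": "ciso",
        },
        # What is it never permitted to do, regardless of confidence?
        "never_permitted": {"delete_evidence", "alter_audit_logs"},
    }

    def check(action: str) -> str:
        policy = AI_AUTHORITY_POLICY
        if action in policy["never_permitted"]:
            return "denied"
        if action in policy["requires_escalation"]:
            return f"escalate to {policy['requires_escalation'][action]}"
        if action in policy["authorized"]:
            return "allowed"
        return "denied by default"  # anything unlisted stays with a human

    print(check("isolate_workstation"))  # allowed
    print(check("notify_regulator"))     # escalate to ciso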

And this isn’t a new role. It’s an existing role that is adapting. The people responsible for policy today need to develop new skills, understand the new technology, and update their frameworks accordingly. The answer isn’t to hire a Chief AI Governance Officer and call it done. The answer is to build the capability into the teams you already have.

When Security Gets It Right, the Whole Org Catches Up

Here’s something I’ve noticed consistently: once adjacent teams see the outcomes security is delivering with AI and automation, they want in.

GRC is the most natural next step. Identity and access management. IT operations. Any function that involves repeatable processes, assurance activity, or continuous monitoring stands to gain significantly. The model translates directly.

And that’s actually one of the most compelling arguments for security teams to lead the initiative on AI advancements. 

When security builds a working model — an outcome layer, a judgment layer, an execution layer that actually delivers — it becomes a common language the wider organization can adopt. 

Security becomes the team that figured it out first. Everyone else becomes a customer of that thinking.

And maybe the most exciting possibility? A real-time CISO-level SOC dashboard that reflects actual organizational risk posture as it stands right now, not as it stood at last quarter’s reporting cycle. A CISO who can finally see everything has been the holy grail for years.

With AI doing the continuous monitoring, the continuous enrichment, the continuous assessment, we might finally be close to it.

The One Place Humans Will Always Sit

I want to be direct about this, because I think it gets obscured in the excitement around AI’s capabilities.

The most complex investigations will always require a human in the loop. 

Not because AI can’t process the data. It can process more data, faster, than any human team. But the decision that comes out of that investigation isn’t solely a data decision — it’s a judgment call that requires knowing the business, the risk appetite, the stakeholders, and what’s politically viable right now. That judgment doesn’t sit in a knowledge base. It lives in the people who’ve built relationships across the organization, who’ve sat in the board meetings, who understand the strategy, the pressures, and the history. 

AI can inform that judgment. It can surface the evidence, structure the analysis, and highlight the options. But the call? That’s human. That stays human.

The organizations that design their AI governance around this principle — AI at machine speed in the execution layer, human authority at the points where it genuinely matters — will be the ones that build something sustainable.

The organizations that sacrifice that line for a quick fix of speed or efficiency will find out exactly why it mattered in the first place — and not at a moment of their choosing.

And that moment will come.

Machine speed where it counts. Human authority where it matters. Get the AI or Die Manifesto and start building.

Read the rest of John’s blog series about AI in the SOC: The AI SOC Org Chart for 2026 and Beyond
