I recently sat between two people who think about AI SOC operations from completely different angles — and spent 50 minutes watching them land in the same place.
Leonid Belkind builds the technology. He co-founded Torq, serves as CTO, and spends his days translating between the market, our customers, and the engineers who build the product. John White spent 20 years on the operational side, most recently as CISO at Virgin Atlantic, where he deployed Torq before crossing over to become our Field CISO. When Leonid talks about what agentic AI can do, John talks about what happened when he actually turned it on with half the headcount he needed.
What I expected was a technology discussion. What I got was a conversation about fear, trust, speed, and why the next six to nine months might be the most important window security leaders have ever faced.
Their thesis: the window to deploy agentic AI in the SOC before machine-speed attacks become the norm is roughly six to nine months. The teams that start now — even on a small scale — will be the ones that thrive. The teams that wait will be the ones that get hit.
Here’s the full recording if you want the unfiltered version. But these are the moments that stuck with me.
The Threat Landscape Has Shifted. AI SOC Operations Haven’t Caught Up.
The conversation started where every SOC conversation starts right now: attackers are moving faster than defenders, and the gap is widening.
Leonid brought up VoidLink, a malware framework that compressed months of attack development into days. But the point wasn’t VoidLink specifically. It was what VoidLink represents. Malicious actors don’t sit through vendor evaluations. They don’t need compliance sign-off or procurement cycles. They grab what’s available and move. Tools that required state-sponsored resources a few years ago are accessible to anyone now.
“The phrase ‘bringing a knife to a gunfight’ hasn’t come from nowhere,” Leonid said. “This thing is happening. If you’re not there, you’re just so ill-equipped to face the challenges it poses.”
That set the tone for everything that followed. Because if the threat landscape has fundamentally shifted — and both of them believe it has — then every stage of AI SOC operations needs to shift with it.
“We certainly can’t use traditional methods as CISOs to address a new risk. That’s the definition of insanity: trying to do the same thing to get a different outcome.”
– John White, Field CISO at Torq
His read: VoidLink isn’t an outlier. It’s just the start.
Triage: The Easiest Win and the Most Overdue
When we moved into the threat lifecycle, Leonid made the case that triage is the most obvious place to start and the place where delay is least defensible.
His reasoning was straightforward. Triage sits at the top of the funnel, facing the highest volume of incoming signals. Detection systems often lack context. Waiting for perfect fidelity means being too late. And the humans doing this work? They’re not great at it. Not because they lack skill, but because the job demands consistency and speed at a scale humans physically can’t sustain.
“Bob, you’re wonderful,” he told me, “but if I give you 1,000 assignments at the same second, no matter how wonderful you are, that’s not your best quality.” Fair point.
Agentic AI doesn’t get decision fatigue. It doesn’t take breaks. It handles non-uniform data and drives toward outcomes without someone having to write a playbook for every scenario. In Leonid’s view, triage was overdue for automation before agentic AI even existed. Now there’s genuinely no excuse.
John brought the human angle. The first thing he sees when AI handles triage is happier staff. “From a CISO’s perspective [when AI for triage is deployed], when you look out at your team, they don’t seem overwhelmed. They’ve got much more time to apply a quality approach.” He emphasized that analysts aren’t unhappy because they dislike security; they’re unhappy because they’re not doing security work. They’re drowning in noise instead of solving problems.
The shift from reactive to proactive is only possible when analysts aren’t buried. “There’s nothing worse than an overwhelmed team trying their best but still not being able to achieve the outcomes they want.”
The takeaway: If you’re not automating triage yet, this is where to start. The risk is low, the ROI is immediate, and the analyst experience improvement alone justifies the investment.
Investigation: The Glass Ceiling Has Broken
Investigation is where the conversation really got interesting, and where both speakers argued the market has underestimated how far agentic AI has come.
Leonid drew a parallel to software engineering. A year ago, copilots suggested code. Now tools like Cursor refactor entire applications. A similar leap has happened in security investigation.
“You as a human should be the copilot,” he said. “The copilot in a real flight is the person supposed to be fresh, up for it, there for escalation scenarios.” AI handles the evidence gathering, enrichment, correlation, and even inference — drawing conclusions, making risk scores, assembling timelines. The analyst steps in for judgment, not grunt work.
He shared a compelling example. Torq’s Director of Strategy — a former head of security operations at a regulated enterprise — tested an investigation exercise he used to give Tier 2 analyst candidates. Human analysts typically took half a day across multiple tools to produce findings with full evidence and timelines. An autonomous AI investigation, crunching the same hundreds of thousands of logs, completed it in under 6 minutes, producing more detailed findings than humans typically produce. Same data, same exercise, apples to apples. Leonid called it “an Archimedes ‘eureka’ moment.”
John focused on what pre-built cases mean operationally. When an analyst receives a case that’s already enriched and contextualized, two things happen: they move faster and with less bias. “In the SOC, having done the role for a long time, you start to build up preconceived ideas of what things look like. The advantage of having AI do that for you is that it’s unbiased.”
He tied it back to his exposure window framework — the time during which attackers operate. “If you can reduce or even remove that exposure window, you’re going to mitigate the threat pretty quickly. You’ve got one answer, one thing you can trust, a definitive way forward, and then you can move into action.”
The takeaway: Investigation is no longer a “human-only” phase. The teams treating it that way are operating with a capability gap that widens every month. Agentic AI doesn’t replace analyst judgment; it gives analysts something worth judging, in minutes instead of hours.
Response: Where AI SOC Operations Get Uncomfortable — and Where They Matter Most
The response phase was the most charged part of the conversation, and the part that makes or breaks the entire AI SOC argument. Because if you speed up triage and investigation but leave response at human speed, your AI SOC operations haven’t closed the loop.
Leonid didn’t mince words: “Many founders start their pitch by saying, ‘Put it in detect-only mode, and then as you gain confidence…’ But as a founder of a security operations company, if you haven’t responded, at best you haven’t done much.”
His argument: leaving containment actions — quarantining endpoints, blocking network traffic, suspending identities — to human speed during active exploitation means deeper organizational exposure. The barrier isn’t technological. It’s psychological. And it cuts both ways: “Are humans 100% trustworthy? They don’t have lapses in judgment? They don’t accidentally push the wrong button?”
John balanced this with practical reality. CISOs are comfortable with automated triage and investigation. Response is where they hesitate, and that hesitation is risk-based, not irrational. The answer isn’t to leap blindly. It’s to start small.
At Virgin Atlantic, John never had abundant resources. The operation was 24/7/365, safety-first. He couldn’t afford human lag. So when deploying Torq in his SOC, he started with a handful of use cases, built trust with the team, and expanded from there. “Within the first four or five use cases, starting small, I was still saving 40 hours a week within the team. That’s a whole analyst’s working week.”
His advice: “Start small, build the trust, and then take AI through the tiers. The more you speculate, the more you accumulate.”
The takeaway: Automated response is where the value compounds but it requires earned trust, not blind faith. Start with low-risk containment actions, prove the guardrails work, and expand. The teams that never start are the ones carrying the most risk.
The SOC That Learns Over Time and the Teams That Restructure Around It
The final section covered the future of the SOC as an organization. Leonid went deep on how AI agents actually learn: semantic knowledge (facts about your environment), procedural knowledge (how things get done), and episodic knowledge (memories of what worked and what didn’t). Each maps to a specific AI technique — from in-context learning for environmental awareness, to reflective prompt evolution for refining procedures, to methods like LoRA for deeper model adaptation. The key insight: most AI learning in security operations happens without retraining the model.
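To make the three knowledge tiers concrete, here is a minimal, hypothetical Python sketch (all names and values are illustrative, not Torq's implementation) of how an agent might hold semantic, procedural, and episodic memory outside the model and assemble it into an in-context prompt, which is why no retraining is needed:

```python
from dataclasses import dataclass, field

@dataclass
class AgentMemory:
    """Illustrative three-tier memory for a SOC agent. Everything lives
    outside the model weights, so the agent 'learns' without retraining."""
    semantic: dict = field(default_factory=dict)    # facts about the environment
    procedural: list = field(default_factory=list)  # refined procedure steps
    episodic: list = field(default_factory=list)    # past cases and lessons learned

    def build_context(self, alert: str) -> str:
        """Assemble an in-context prompt from all three tiers."""
        facts = "; ".join(f"{k}={v}" for k, v in self.semantic.items())
        steps = " -> ".join(self.procedural)
        # Naive episodic recall: keyword overlap between the alert and past cases.
        relevant = [e for e in self.episodic
                    if any(w in e["case"] for w in alert.split())]
        lessons = "; ".join(e["lesson"] for e in relevant)
        return (f"Alert: {alert}\nFacts: {facts}\n"
                f"Procedure: {steps}\nLessons: {lessons}")

memory = AgentMemory(
    semantic={"crown_jewel": "payments-db", "vpn_vendor": "ExampleVPN"},
    procedural=["enrich IOCs", "correlate identity logs", "score risk"],
    episodic=[{"case": "impossible travel login",
               "lesson": "check VPN egress first"}],
)
print(memory.build_context("impossible travel login for finance user"))
```

Updating any of the three tiers — a new fact, a refined step, a new episode — changes the agent's behavior on the next alert with the model weights untouched, which is the point Leonid was making.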
John took the strategic view. Looking back at 2025’s high-profile attacks, detection wasn’t the failure — the gap between detection and action was. AI attackers set an intent and let the model figure out the how, making them unpredictable in ways that static defenses can’t match.
His vision for the AI SOC in 2026 goes beyond technology.
“AI doesn’t just change technology. It’s going to change the way security teams work — how we structure teams, the roles we assign, the execution we give up to AI so we can concentrate on designing outcomes and judging performance.”
– John White, Field CISO at Torq
He introduced the concept of the agentic workforce — taking existing analyst roles (a vulnerability management analyst, for example), mapping the tools and processes they use, and gathering them into an agentic persona. Not replacing the human. Redefining what the human does.
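John's role-to-persona mapping can be sketched as a simple structure. This is a hypothetical illustration (the role, tools, and decision split are made up for the example, not taken from the talk) of the exercise he describes: list what the analyst uses, what the agent takes over, and what the human keeps:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentPersona:
    """One agentic persona built from an existing analyst role."""
    role: str
    tools: tuple          # systems the human analyst already uses
    processes: tuple      # recurring workflows handed to the agent
    human_retains: tuple  # judgment calls that stay with the analyst

# Hypothetical mapping for the vulnerability-management example.
vuln_mgmt = AgentPersona(
    role="Vulnerability Management Analyst",
    tools=("scanner", "CMDB", "ticketing"),
    processes=("triage new CVEs", "deduplicate findings",
               "open remediation tickets"),
    human_retains=("accept risk", "approve emergency patching"),
)

print(f"{vuln_mgmt.role}: agent runs {len(vuln_mgmt.processes)} workflows, "
      f"human keeps {len(vuln_mgmt.human_retains)} decisions")
```

The split in `human_retains` is the operative part: execution moves to the agent, while the decisions that carry accountability stay with the person, which is the "redefining, not replacing" point.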
“CISOs should be expecting constant and consistent delivery. That’s what AI brings. You don’t have to wait for someone to turn up to work.”
One moment that stuck: a Torq customer told John he “got his Christmas back” because automation changed the team’s shift patterns. Escalations still come to humans out of hours, but the first phases run at machine speed regardless of who’s on shift.
The takeaway: The AI SOC doesn’t just change your technology. It changes your org chart, your shift patterns, your hiring profile, and what “analyst” means. The teams thinking about this now will adapt. The teams that aren’t will be restructuring reactively after the next major incident.
The AI SOC Operations Playbook: The Window Is Closing
John closed with urgency. “Don’t fear AI. Embrace AI. At the moment, there is still the opportunity to get ahead of the curve, but that window is closing. I’d say we have maybe 6 to 9 months before machine-speed attacks really start becoming commonplace. Those who have adopted an agentic approach will thrive. Those that haven’t — they’re going to be the companies that get hit.”
Leonid’s closing was equally direct. Responsible adoption is possible. The guardrails exist. The industry learnings are sufficient. The only remaining question is whether you act on it.
Here’s the practical path both speakers laid out for transforming AI SOC operations:
- Start with triage. Lowest risk, highest volume, most immediate ROI. Get analysts out of the noise.
- Expand into investigation. Let AI build the case. Let analysts make the call. Compress the exposure window from hours to minutes.
- Earn your way into response. Start with low-risk containment actions. Build trust. Expand the scope as confidence grows. Don’t skip this step.
- Think beyond technology. Start designing agentic roles. Map existing analyst workflows to agent personas. The org structure that works in 2026 isn’t the one you have today.
“[With AI in the SOC], we can’t wait for perfect,” John said. “It’s going to be ever-evolving. The most important step is just to get on the journey.”