
Artificial Intelligence (AI) is often promoted as the silver bullet for the overwhelmed Security Operations Center (SOC). With alert volumes surging and attacker dwell times shrinking, the appeal of an autonomous, all-knowing AI system becomes stronger. Vendors promise incredible results: AI that can instantly improve your security posture.
But reality is more complex. Can AI be transformative? Yes, but only when it is implemented with discipline and clarity. Poorly deployed AI amplifies noise, embeds bias at machine speed, and erodes trust between the SOC and the business. The line between game-changing progress and catastrophic risk is razor-thin: AI is an accelerant, not a miracle worker. This blog summarizes key insights from Dr. Chase Cunningham’s white paper, “Artificial Intelligence in Analytics & SIEM: A Field Guide,” providing a practical roadmap for incorporating AI into your security strategy, not as a budget line item, but as a justified, effective capability.
The most essential truth: AI doesn’t fix broken systems—it magnifies them. If your telemetry is incomplete, detection rules are noisy, and workflows are inconsistent, AI will scale those weaknesses. It will produce flawed conclusions faster and more confidently.
Conversely, with high-quality data, clear detection logic, and disciplined processes, AI becomes a force multiplier. It enables your team’s expertise to scale, improving precision and recall across the board.
The thesis is simple: AI is an accelerant. It will magnify the quality of your data, the clarity of your detection content, and the rigor of your processes. If those inputs are weak, AI will magnify that weakness. If they’re strong, AI will finally let them scale.
Key takeaway from the whitepaper: AI success starts with strong inputs. Before accelerating, build a solid vehicle and a clear road. Measure progress in quarters, not weeks, and demand proof, not just demos.
In the modern enterprise, identity is the new control plane. With infrastructure spanning on-premises systems, cloud, and SaaS applications, identity has become the common thread that correlates nearly every significant attack. Adversaries understand this; credential abuse remains a leading attack vector, and many breaches originate through third-party access (30% according to the DBIR 2025).
But identity is no longer limited to human users. Increasingly, non-human identities such as service accounts, APIs, and AI systems are being deployed across environments to automate tasks, make decisions, and interact with sensitive data. These entities often operate with elevated privileges and minimal oversight, making them attractive targets for adversaries and potential sources of insider risk. If compromised or misconfigured, they can act as silent insiders, moving laterally and exfiltrating data without triggering traditional alerts. Therefore, any AI analytics engine that is not thoroughly focused on identity—users, devices, service principals—is effectively flying blind. To identify and assess modern attacks, your AI must ingest and analyze identity-aware telemetry from sources such as your Identity Provider (IdP), MFA systems, and cloud control planes. This data is essential.
Dr. ZeroTrust’s line in the sand: Identity is the new control plane. If your AI can’t “see” IdP decisions, token use, and privilege context, you’re optimizing downstream of the real problem. This shifts the traditional approach to log collection: it’s not about gathering everything; it’s about prioritizing high-ROI data sources that provide strong identity context. Without this, your AI will see disconnected events rather than a clear story of a compromise.
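The “story of a compromise” framing above can be illustrated with a toy correlation step: group raw events from different sources (IdP, MFA, cloud control plane) by the identity they involve, so that a service account’s login, privilege escalation, and data access read as one ordered sequence instead of three disconnected alerts. This is a minimal sketch; every event field and value here is a hypothetical illustration, not a real telemetry schema.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Event:
    timestamp: str   # ISO 8601, sortable as a string
    source: str      # hypothetical sources: "idp", "mfa", "cloud_api"
    identity: str    # human user, service account, or device principal
    action: str

def correlate_by_identity(events):
    """Group raw telemetry into per-identity timelines so a
    compromise reads as one story, not disconnected events."""
    timelines = defaultdict(list)
    for ev in sorted(events, key=lambda e: e.timestamp):
        timelines[ev.identity].append(ev)
    return dict(timelines)

# Hypothetical example: a non-human identity (service account)
# logs in, escalates privilege, then touches sensitive data.
events = [
    Event("2025-06-01T09:00Z", "idp", "svc-backup", "login_success"),
    Event("2025-06-01T09:02Z", "cloud_api", "svc-backup", "AssumeRole admin"),
    Event("2025-06-01T09:05Z", "cloud_api", "svc-backup", "GetObject s3://payroll"),
    Event("2025-06-01T09:01Z", "mfa", "alice", "push_approved"),
]

timelines = correlate_by_identity(events)
# svc-backup's login, escalation, and data access now form one ordered timeline.
print([e.action for e in timelines["svc-backup"]])
```

In practice this pivot-on-identity step is what lets an analytics engine join IdP decisions, token use, and privilege context into a single narrative per principal, which is exactly the visibility the white paper argues AI needs.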
One of the leading causes of AI fatigue is the disconnect between the marketing hype of an autonomous SOC and the reality of assistive AI. The real, tangible value of AI today is its role as a powerful co-pilot for human analysts, enhancing their judgment and reducing their workload, not replacing them.
The current high-value use cases for AI in the SOC are focused, practical, and assistive rather than autonomous. In each of them, human judgment remains essential for forming hypotheses, setting priorities, and making final response decisions. The purpose of AI is to reduce the time to first decision and improve analyst consistency, freeing skilled responders to focus on the tasks that require human thinking.
Here’s a counterintuitive but vital truth: an AI model operating within your SOC isn’t just another tool. It is, by definition, a privileged system. It interacts with your most sensitive logs, influences triage and investigation, and may even be connected to response actions. Its access and influence require strict oversight.
This means that AI governance, as outlined in frameworks like the NIST AI Risk Management Framework (RMF), is not just bureaucratic overhead; it is a crucial safety measure. As a recent IBM report found, “shadow AI and weak access controls drive risk.” Without governance, you cannot guarantee that your AI won’t amplify noise, reinforce bias, or undermine the trust you are working to establish.
Dr. ZeroTrust guidance: Treat models as privileged systems. They require identity, access, change control, and auditing, no exceptions.
Adopting this governance mindset is crucial. It encourages you to ask the right questions: Who can query this model? What data does it process? How are its outputs checked? How do we handle changes to its logic? Answering these questions helps keep your AI a trusted, reliable asset rather than an opaque liability that fuels bias and undermines the trust you’re trying to establish.
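One minimal way to operationalize “models as privileged systems” is to put identity, access control, and auditing in front of every model call, so the governance questions above (who can query, what is processed, how changes are tracked) have concrete answers. The sketch below uses hypothetical role names and a stand-in `model` callable; a real deployment would integrate with your IdP and a tamper-evident audit store.

```python
import datetime

# Assumption: example roles for illustration, not a prescribed RBAC scheme.
AUTHORIZED_ROLES = {"soc_analyst", "detection_engineer"}

audit_log = []

def query_model(user, role, prompt, model=lambda p: f"triage summary for: {p}"):
    """Gate model access by role and record every call, allowed or not.
    The `model` callable stands in for a real inference endpoint."""
    allowed = role in AUTHORIZED_ROLES
    audit_log.append({
        "when": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "who": user,
        "role": role,
        "prompt": prompt,
        "allowed": allowed,
    })
    if not allowed:
        raise PermissionError(f"{user} ({role}) may not query the model")
    return model(prompt)

# An authorized analyst gets a response; every call leaves an audit entry.
summary = query_model("alice", "soc_analyst", "summarize alert 1234")
```

The design choice worth noting: denied calls are logged before the exception is raised, so the audit trail captures attempted misuse (including shadow AI usage) rather than only successful queries.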
Conclusion: The real value of AI in security comes not from hype but from discipline. It depends on consistently getting the basics right: high-quality data, rich identity context, practical tools that assist people, and strict governance that treats AI as a privileged system. When you get these four areas right, AI shifts from promising talk to a true force multiplier, enabling your team to make decisions faster, more consistently, and more defensibly. Before choosing which AI tool to purchase, the key question for any security leader today is: Are our data and discipline prepared for the boost?
For a deeper dive into these principles and actionable guidance, download the white paper “Artificial Intelligence in Analytics & SIEM: A Field Guide”.