New for 2026
The AI challenge in security: probabilistic vs deterministic
Artificial intelligence has introduced a fundamental tension into how organisations approach security - and there's a growing gap between the way AI systems behave and the way traditional security controls are designed to work.
How traditional security controls think: deterministic logic
Conventional security is built on deterministic logic. A firewall rule either permits or blocks a connection. An EDR agent either detects a known signature or does not. A vulnerability scanner either finds a CVE or reports clean. These controls produce consistent, repeatable, auditable outputs. Given the same input, they return the same result. That predictability is their greatest strength - and, increasingly, their greatest limitation.
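The deterministic behaviour described above can be sketched in a few lines. This is an illustrative toy only; the rule set and function name are hypothetical, not any vendor's API:

```python
# Illustrative sketch only: a toy deterministic "firewall rule" check.
# The blocked-port list and function name are hypothetical examples.
BLOCKED_PORTS = {23, 445, 3389}

def allow_connection(dst_port: int) -> bool:
    """Deterministic: the same input always yields the same verdict."""
    return dst_port not in BLOCKED_PORTS

# Repeatable, auditable output: every call with port 443 permits,
# every call with port 445 blocks. Given the same input, same result.
assert allow_connection(443) is True
assert allow_connection(445) is False
```

The consistency is the point: the verdict can be logged, audited, and reproduced exactly, which is also why anything engineered to sit outside the rule set passes untouched.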
Deterministic controls are designed for a world where threats behave consistently and can be catalogued. But AI-powered attacks are not consistent. They are adaptive, contextual, and designed specifically to evade rule-based detection. Over 80% of phishing emails identified in late 2024 and early 2025 involved some form of AI assistance. These messages bypass traditional filters not because the filters failed, but because the attacks were generated to be indistinguishable from legitimate communications at the rule-matching layer.
How AI-powered threats think: probabilistic behaviour
AI-driven threats and AI-assisted workflows operate probabilistically. They do not follow fixed paths. They sample from distributions of possible actions, adapt to context, and produce outputs that vary even when given similar inputs. A language model crafting a phishing email does not produce the same email twice. An AI agent navigating a corporate network does not follow a predictable attack path. This non-determinism is precisely what makes AI-powered attacks difficult to detect with rule-based controls.
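That sampling behaviour can be contrasted with the deterministic case in a short sketch. The action names and weights below are entirely hypothetical, chosen only to show why repeated runs diverge:

```python
# Illustrative sketch only: a toy "probabilistic attacker" that samples
# its next move from a distribution. Action names/weights are hypothetical.
import random

ACTIONS = ["phish_finance", "phish_hr", "lateral_move", "exfil_small", "wait"]
WEIGHTS = [0.3, 0.2, 0.2, 0.1, 0.2]

def next_action(rng: random.Random) -> str:
    """Non-deterministic: similar context, varying output on each call."""
    return rng.choices(ACTIONS, weights=WEIGHTS, k=1)[0]

# Two "attacks" started from the same state typically follow different
# paths, which is what defeats exact signature or rule matching.
path_a = [next_action(random.Random(1)) for _ in range(5)]
path_b = [next_action(random.Random(2)) for _ in range(5)]
```

There is no single signature to write for `path_a`: the next run samples a different sequence from the same distribution.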
The same challenge applies to AI inside the enterprise. Agentic AI — systems that take autonomous action on behalf of users — is now in use at 67% of organisations. AI agents can provision access, move data, generate content, and make decisions without human approval at each step. As a 2026 analysis from Eran Barak concluded, "AI introduces too much speed and autonomy for surface-level controls to keep up."
The result is what security analysts are now calling the "AI security gap": organisations are deploying AI at a pace that outstrips their ability to govern it. Only 6% of organisations have an advanced AI security strategy, even as 93% are using platform-based security. The tools exist. The visibility does not.
The core problem for security controls in an AI world
Deterministic controls answer a binary question: "did this specific thing happen?" Probabilistic AI demands an answer to a different kind of question: what is the current state of trust across my entire estate, given all the signals I can observe? Traditional controls cannot answer that second question; they were not built to. Continuous Controls Monitoring (CCM), and specifically AI-native CCM, is built precisely to answer it. It does not replace deterministic controls. Instead, it provides the intelligence layer that tells you whether all of those controls are actually working, whether their coverage is complete, and where the compound risks that sit between them are silently accumulating.
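The "state of trust" question can be made concrete with a minimal sketch: instead of a yes/no verdict, aggregate observed control signals into a single estimate. The signal names, weights, and scoring approach here are hypothetical illustrations, not Panaseer's actual model:

```python
# Illustrative sketch only: turning many control signals into one trust
# estimate. Signal names and weights are hypothetical, not any real model.
def trust_score(signals: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted aggregate of per-control health signals, each in [0, 1]."""
    total = sum(weights.values())
    return sum(signals[name] * w for name, w in weights.items()) / total

signals = {"edr_coverage": 0.92, "patch_currency": 0.67, "mfa_enrolment": 0.80}
weights = {"edr_coverage": 0.5, "patch_currency": 0.3, "mfa_enrolment": 0.2}

# A continuous posture estimate rather than a binary pass/fail verdict.
score = trust_score(signals, weights)
```

The output is a graded answer that moves as the underlying signals move, which is the kind of question deterministic pass/fail checks cannot express on their own.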
Why AI-native CCM is the answer to both sides of the problem
The answer to AI-powered threats is not to abandon deterministic controls. A patched system is a patched system. An endpoint with EDR deployed is genuinely more protected than one without. The answer is to ensure that the deterministic controls you have are actually deployed, configured, and working — and then to layer AI intelligence on top to identify the patterns and compound risks that human analysts and manual processes cannot see at scale.
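Validating that a deterministic control is "actually deployed, configured, and working" often reduces to comparing inventories. A minimal sketch, with hypothetical host names standing in for real asset and tool inventories:

```python
# Illustrative sketch only: finding where a deterministic control (EDR)
# is missing. Both inventories here are hypothetical example data.
all_endpoints = {"host-01", "host-02", "host-03", "host-04"}
edr_reporting = {"host-01", "host-03"}

# Endpoints the asset inventory knows about but the EDR tool does not see:
coverage_gap = all_endpoints - edr_reporting

coverage_pct = 100 * len(edr_reporting & all_endpoints) / len(all_endpoints)
```

Each uncovered host is a place where the deterministic control cannot fire at all, regardless of how good its rules are, and that gap is what a monitoring layer surfaces.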
This is exactly what Panaseer's IQ Suite does. MetricIQ uses AI to triage your most critical metric gaps, automatically surfacing what matters most across 250+ out-of-the-box metrics. Key Drivers applies probabilistic analysis to explain why metrics are changing — not just that they changed, but which factors are driving the movement and what to do next. The result is a security posture that is both rigorously validated (deterministic confirmation that controls are deployed and working) and intelligently prioritised (probabilistic AI identifying where hidden risk is accumulating).