Disclaimer: Opinions expressed are solely my own and do not express the views or opinions of my employer or any other entities with which I am affiliated.

All of my posts are free and exist without a paywall. If you enjoy my writing and would like to support me, please consider buying a paid subscription:
I won’t recap RSA, but it was clear that AI dominated the conversation. Compared to last year’s focus on existential fears and hype, this year’s tone was more grounded. Security professionals seemed more ready to embrace AI with a mix of curiosity and caution. That’s a promising shift.
In my recent writing, two themes around AI and security have consistently surfaced: first, the increasing adoption of AI in security tools and workflows; and second, the growing need to rethink change management as these tools become more powerful and autonomous. Both are worth unpacking.
I’ve long believed that security practitioners should understand the systems they’re defending—not just conceptually, but technically. You can’t protect what you don’t understand. That’s a long-standing problem in appsec, where many teams are excellent at identifying flaws but have never actually built or shipped high-quality software. Cloud security evolved differently, arguably more successfully, because it grew out of DevOps, where the emphasis was on building and running systems, not just securing them.
AI is now revealing similar gaps.
A recent LinkedIn post by Christofer Hoff (CTO/CSO of LastPass) touched on this. His view: security leaders and practitioners need technical depth in emerging technologies like AI, especially since the risk abstractions we rely on in other domains haven’t solidified yet. I largely agree.
Think of it this way: most security pros don’t need to know how an OS scheduler works because containers, compilers, and frameworks abstract away that complexity. They did need to know it before those abstractions matured, and AI is at that earlier stage: it doesn’t yet have a comparable layer of safety abstractions. The risks live closer to the surface.
I differ from Christofer slightly on what “technical depth” means. Yes, security leaders should probably know what tokenization is, how a dot product works, and what supervised learning involves. These are foundational ideas, well within reach and not especially difficult. But unless you’re building your own models, understanding attention mechanisms in detail may not be necessary. What is necessary is having enough depth to ask good questions and challenge poor assumptions.
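To make that bar concrete, here is a deliberately toy Python sketch of two of those ideas, tokenization and the dot product. Everything in it (the whitespace tokenizer, the function names, the made-up vectors) is a simplified assumption for illustration, nothing like a production tokenizer or model, but it shows how small these building blocks really are.

```python
# Toy illustration of two foundational ideas: tokenization (splitting text into
# units a model can work with) and the dot product (the basic similarity
# operation behind embeddings and attention). Deliberately simplified.

def toy_tokenize(text: str) -> list[str]:
    """Whitespace tokenization; real tokenizers use subword schemes like BPE."""
    return text.lower().split()

def dot_product(a: list[float], b: list[float]) -> float:
    """Sum of element-wise products; the core of embedding similarity."""
    return sum(x * y for x, y in zip(a, b))

tokens = toy_tokenize("Security leaders should understand the basics")
print(tokens)  # ['security', 'leaders', 'should', 'understand', 'the', 'basics']

# Two made-up 3-dimensional "embeddings"; a higher dot product means more similar.
v_prompt = [0.9, 0.1, 0.3]
v_doc = [0.8, 0.2, 0.1]
print(dot_product(v_prompt, v_doc))  # 0.77
```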
Which brings us to the proverbial “Bob,” a stand-in for the security leader who makes technically incorrect claims about AI and is far removed from the problem space. Is that bad leadership? Maybe not. But it does introduce risk when leaders make decisions based on second-hand summaries instead of staying engaged with how the technology actually works, especially in AI, where even small misunderstandings can have major downstream consequences. The better (or at least more nuanced) response for “Bob” is to ensure that the people with the most context make the important technical decisions: get the right people in the room when those decisions are made, rather than claiming he doesn’t need to understand what’s going on because he is focused on outcomes.
We don’t yet have the tools or abstractions that allow non-specialists to work safely with AI systems. That means security leaders and practitioners need to be vigilant about how AI is used in their organization (and whether it’s necessary at all).
Change Management Is the Real Challenge
Even if we get technical depth right, there’s still the question of how organizations manage the speed and complexity that AI introduces.
This hit home for me when I saw a LinkedIn post by Isaac Evans (CEO of Semgrep) about “vibe coding,” the trend where developers move faster and rely more on intuition or AI-driven tools rather than strict, manual review processes. While the post was funny, the underlying issue isn’t: AI increases the volume and velocity of code changes, and we haven’t fully adapted our change management strategies to keep up.
This isn’t the first time we’ve faced this problem. Cloud and agile development introduced similar challenges, replacing a few releases per year with dozens per day. The security industry responded with new tools (like Semgrep and Snyk) and new practices (like automated CI/CD checks). That same transition is now playing out in the AI space, but faster, and with fuzzier boundaries between author and tool.
Volume is the problem. The more code that gets written, the higher the likelihood of mistakes and risk. Without a modern approach to monitoring and validation, AI-assisted development will outpace our ability to secure it.
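As one sketch of what keeping up could look like, consider a pre-merge gate that routes large or sensitive AI-assisted changes to a human reviewer instead of letting them auto-merge. The thresholds, field names, and labels below are assumptions I’ve made for illustration, not a reference to any particular tool or workflow.

```python
# Hypothetical pre-merge gate: decide whether an AI-assisted change needs a
# mandatory human review before it ships. Thresholds and labels are
# illustrative assumptions, not recommendations for any particular tool.
from dataclasses import dataclass

@dataclass
class ChangeSet:
    author: str
    ai_assisted: bool              # e.g. set by the code-assistant integration
    lines_changed: int
    touches_sensitive_paths: bool  # auth, crypto, payment code, etc.

def requires_human_review(change: ChangeSet,
                          max_unreviewed_lines: int = 50) -> bool:
    """Route risky AI-assisted changes to a human instead of auto-merging."""
    if not change.ai_assisted:
        return True  # keep the existing review policy for human-written code
    if change.touches_sensitive_paths:
        return True
    return change.lines_changed > max_unreviewed_lines

change = ChangeSet(author="bot+dev", ai_assisted=True,
                   lines_changed=320, touches_sensitive_paths=False)
print(requires_human_review(change))  # True: too large to merge unreviewed
```

The specific threshold isn’t the point; the point is that review effort scales with the risk of the change instead of treating every AI-assisted commit the same.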
Rethinking the Operating Model
So, what should change management look like in the AI era?
At a minimum, we need more observability around AI systems. That means logging prompts and outputs, setting up automated rules to detect anomalies, and alerting humans when things look off. For internal models, we should be benchmarking performance and safety against test suites, not just once, but continuously. These are ideas borrowed from existing security practices, but they’re not yet widely applied to AI workflows.
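To illustrate (and only to illustrate) what that could look like, here is a minimal Python sketch of a logging-and-alerting wrapper around a model call. The call_model callable, the anomaly rules, and the alert behavior are placeholders I’ve assumed for the example; a real deployment would feed structured logs into an existing SIEM or monitoring pipeline and tune the rules to the application.

```python
# Minimal sketch of AI observability: log every prompt/output pair and raise an
# alert when simple anomaly rules fire. The model call, rules, and alert hook
# are placeholders; real deployments would ship records to a log pipeline.
import json
import logging
import re
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-observability")

SECRET_PATTERN = re.compile(r"(api[_-]?key|password|BEGIN [A-Z ]*PRIVATE KEY)", re.I)

def looks_anomalous(prompt: str, output: str) -> list[str]:
    """Toy anomaly rules; real rules would be tuned to the application."""
    reasons = []
    if SECRET_PATTERN.search(output):
        reasons.append("possible secret in model output")
    if len(output) > 10_000:
        reasons.append("unusually long output")
    return reasons

def observed_call(call_model, prompt: str) -> str:
    """Wrap a model call with structured logging and anomaly alerting."""
    output = call_model(prompt)
    record = {"ts": time.time(), "prompt": prompt, "output": output}
    log.info(json.dumps(record))          # ship to your log pipeline of choice
    for reason in looks_anomalous(prompt, output):
        log.warning("ALERT: %s", reason)  # in practice: page a human, open a ticket
    return output

# Example usage with a stand-in "model":
fake_model = lambda p: "Here is your api_key=sk-not-a-real-key"
observed_call(fake_model, "Summarize our incident report")
```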
More philosophically, I come back to the idea that AI is a tool for generating possible solutions, but verifying them is still a human task. Verification might become easier than creation, and that shifts where our security focus should be. (And for the theoretical computer science folks out there: yes, I believe P ≠ NP.)
Interestingly, that might also lead to better software. If we increase the emphasis on testing and verification because we trust the generative process less, we may end up with products that are more resilient, not less. AI becomes a form of advanced automation, not a shortcut. Think of the car industry: more automation, more quality control, fewer errors (usually).
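Here is a small sketch of that “verify, don’t just trust the generation” idea, assuming a candidate function proposed by a code assistant and a human-written test suite that gates whether it gets accepted. Both the candidate and the tests are invented for the example.

```python
# Sketch of "verify, don't trust": a candidate implementation (imagine it was
# AI-generated) is only accepted if it passes a human-written test suite.
# Checking an answer is usually cheaper than producing it.

def candidate_is_sorted(xs):            # pretend this came from a code assistant
    return all(a <= b for a, b in zip(xs, xs[1:]))

HUMAN_WRITTEN_TESTS = [
    (([1, 2, 3],), True),
    (([3, 1, 2],), False),
    (([],), True),
    (([5],), True),
]

def verify(candidate, tests) -> bool:
    """Run the candidate against every test case; reject on any failure or crash."""
    for args, expected in tests:
        try:
            if candidate(*args) != expected:
                return False
        except Exception:
            return False
    return True

print(verify(candidate_is_sorted, HUMAN_WRITTEN_TESTS))  # True: accept this change
```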
Real-World Examples and Emerging Trends
To ground this discussion, let’s look at some recent developments:
- AI-Powered Threat Detection: Companies like CrowdStrike and Palo Alto Networks are integrating AI to enhance threat detection and response. For instance, CrowdStrike’s Falcon platform uses AI to identify and mitigate threats in real time, reducing the burden on security teams.
- Deepfake Scams: The rise of generative AI has led to sophisticated scams. A notable case involved a $25 million fraud in Hong Kong using deepfaked video calls. This highlights the need for advanced verification methods and AI-driven detection tools.
- Investor Pressure: A KPMG survey revealed that investor demands for AI deployment have risen sharply, from 68% in late 2024 to 90% in early 2025. This surge is prompting executives to accelerate AI initiatives, sometimes without comprehensive strategies, leading to potential risks.
These examples underscore the dual-edged nature of AI in cybersecurity, offering powerful tools for defense while introducing new vulnerabilities.
Where This Leaves Us
AI in security isn’t a passing trend. It’s here, and the conversation is finally shifting from “should we use it?” to “how do we use it responsibly?” That’s progress.
But to keep pace, we need more than adoption — we need understanding. We need leaders who aren’t afraid to get a bit technical, teams that are challenged to think critically about risk, and new models of oversight that reflect the speed and complexity of modern development.
We’ve done this before with cloud, with agile, with DevOps. We’ll do it again with AI. But it’s going to require us to evolve, not just our tools, but our mindset.