Rapid AI transitions are the new normal
What two cybersecurity developments tell us about the AI upheaval ahead.
In “We’re all thinking about AI wrong” last month, I wrote about how AI continues to befuddle us with its jagged brilliance, where it can be phenomenal at some things yet absolutely dismal at others.
What's more, even within a single field, the results can vary wildly. For instance, AI can craft great headlines or passable press releases but fall completely flat when it comes to a good opinion piece. And where a commercial lawyer might sing praises about how AI could replace the work of multiple legal assistants, a veteran criminal defense lawyer would avoid it like the plague, given the devastating consequences of a single hallucination.
This week, two contrasting developments in cybersecurity put this into sharp relief. If nothing else, they underscored that rapid AI-driven transitions are becoming the new normal, and no industry or role will be spared the upheaval.
When AI is both the problem and the solution
Earlier this week, a man tinkering with Claude Code to hack his own DJI Romo robot vacuum (he wanted to control it with his PS5 controller) stumbled upon a serious security flaw. How bad? The bug gave him control over thousands of other users' robot vacuums using nothing more than his own login credentials.
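Gaining control of other people's devices with only your own credentials is the hallmark of a broken object-level authorization (IDOR) bug: the API checks that you are logged in, but not that the device you are commanding is yours. A minimal sketch of that class of flaw, with entirely hypothetical names (this is not DJI's actual API):

```python
# Illustrative only: the kind of missing ownership check behind
# "my login controls everyone's device" bugs. Names are made up.

DEVICES = {
    "vac-001": {"owner": "alice"},
    "vac-002": {"owner": "bob"},
}

def send_command_insecure(user: str, device_id: str, command: str) -> str:
    # BUG: any authenticated user can command any device ID they can guess.
    if device_id not in DEVICES:
        raise KeyError("unknown device")
    return f"{command} sent to {device_id}"

def send_command_secure(user: str, device_id: str, command: str) -> str:
    # FIX: verify the authenticated user actually owns the device.
    device = DEVICES.get(device_id)
    if device is None:
        raise KeyError("unknown device")
    if device["owner"] != user:
        raise PermissionError("device does not belong to this user")
    return f"{command} sent to {device_id}"
```

The insecure version is easy to write and passes every "happy path" test, which is exactly why this class of bug slips into production so often.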
Think about it. If even a commercial product from a well-known manufacturer could contain such security oversights, what about vibe-coded apps created by non-technical users with little regard for best practices? That’s a perennial concern of mine. The average code out there is already insecure, and that's what AI systems are being trained on. Are we heading towards a vibe-coded security apocalypse?
Don’t just take my word for it. A study by Escape Research published in October 2025 analysed over 5,600 publicly available vibe-coded apps built with tools like Lovable, Base44, Vibe Studio, Create.xyz, and Bolt.new, and found that over 60% were vulnerable, with 98 issues rated highly critical.
Stop all AI use in code, then? It isn’t that simple. Jagged brilliance, remember? Just yesterday, Anthropic unveiled Claude Code Security, a new capability built into Claude Code for the web that scans codebases for vulnerabilities and suggests targeted patches.
And the results speak for themselves. Anthropic said its internal team has already found over 500 vulnerabilities in production open-source codebases using its latest AI model, Opus 4.6. These are bugs in mature, well-established products that had gone undetected for decades, overlooked despite multiple rounds of expert review.
Brace for a protracted period of upheaval
It’s worth noting that cybersecurity stocks promptly tumbled after the Anthropic announcement. This seems misplaced, as the static application security testing (SAST) that Claude Code Security offers is quite different from other segments of the cybersecurity market, such as endpoint security, threat intelligence, identity management, or secure connectivity, to name just a few.
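For readers unfamiliar with SAST: it inspects source code for vulnerabilities without ever running it. The toy scanner below shows the basic idea; the rules here are purely illustrative, and real tools (whether rule-based or model-driven, as with Claude Code Security) are vastly more sophisticated:

```python
import re

# Toy sketch of static application security testing: scan source text
# for risky patterns without executing it. Rules are illustrative only.
RULES = [
    (re.compile(r"\beval\s*\("), "use of eval() on dynamic input"),
    (re.compile(r"password\s*=\s*[\"'][^\"']+[\"']"), "hardcoded credential"),
    (re.compile(r"verify\s*=\s*False"), "TLS verification disabled"),
]

def scan(source: str) -> list[tuple[int, str]]:
    """Return (line_number, finding) pairs for each rule that matches."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, message in RULES:
            if pattern.search(line):
                findings.append((lineno, message))
    return findings
```

The point of the sketch is the contrast: this works on code at rest, whereas endpoint security, threat intelligence, and identity management operate on running systems, network traffic, and user behaviour, which is why the two are not interchangeable product categories.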
Yet markets rarely parse such distinctions in the heat of the moment. The sell-off likely reflects a broader anxiety that AI could eventually encroach on any segment of the cybersecurity value chain, regardless of where it starts. Today it's SAST. Tomorrow it might be threat detection or identity management.
Perhaps we all know intuitively that AI is set to reshape the way we work and do business, even though most organisations are still struggling to generate meaningful returns from their AI initiatives. For investors, the uncertainty alone is enough to trigger a knee-jerk reaction.
Here's what I think: as AI capabilities continue to evolve, they will throw an increasing number of industries into disarray. My current hypothesis is that we can expect years of transitions, even if AI stops getting better today. Of course, there's no sign of it stopping, let alone slowing down. And that's precisely the point.
If there's one certainty, it's that we are still far, far away from understanding what AI will ultimately mean for the sector. And if that's true of cybersecurity, it's true of just about every other industry AI touches.
My advice? Stop waiting for the dust to settle. Love AI or hate AI, these transitions are the new normal, and those who adapt to operating in a state of constant flux will be better positioned for what's ahead. Buckle up.
A version of this first appeared in the commentary section of my free Tech Stories newsletter, which goes out every Sunday. It also includes a digest of the other stories I wrote in the preceding week. To get it in your inbox, sign up here.