The New Hacker Summer
Something feels familiar in cybersecurity right now.
There’s a certain energy in the air. A sense of acceleration. New tools, new workflows, new abstractions. Software is being produced faster than at any point in history. Entire applications materialize from prompts. Infrastructure scaffolds itself. Agents refactor repositories while you watch.
It feels like progress. And it is.
But it also feels like something we’ve seen before.
When the internet first expanded, services were exposed faster than anyone could properly secure them. Connectivity outpaced caution. Functionality came first. Threat modeling came later, usually after an incident.
AI-assisted development is triggering a similar dynamic.
We are moving fast. And in the process, we are quietly reintroducing problems the industry already learned how to solve.
When “It Works” Is Good Enough
Large language models are remarkably good at producing clean, structured, readable code. It compiles. It passes tests. It includes comments. It looks like something written by a competent engineer on a productive day.
And that’s exactly the trap.
Because secure software is not defined by whether it runs. It is defined by how it behaves when someone tries to break it.
AI-generated code frequently makes implicit trust assumptions. It wires services together without clearly defining boundaries. It embeds secrets for convenience. It validates input just enough to satisfy the happy path. Dependencies are included because they “solve the problem”, not because they’ve been evaluated.
These are not exotic AI-native vulnerabilities. They’re classics. Injection flaws. Broken access control. Overexposed internal services. Insecure defaults.
We are not facing a new class of weaknesses. We are mass-producing old ones.
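The injection case is the easiest to make concrete. A minimal sketch in Python, using the standard library's sqlite3 module; the table and function names are illustrative, but the pattern on both sides is exactly the classic one:

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Typical "happy path" generated code: works fine for normal input,
    # but string interpolation makes the query injectable
    # (e.g. username = "x' OR '1'='1" returns every row).
    cur = conn.execute(f"SELECT id, name FROM users WHERE name = '{username}'")
    return cur.fetchall()

def find_user_safe(conn, username):
    # Parameterized query: the driver binds the value, so attacker-controlled
    # input can never change the query's structure.
    cur = conn.execute("SELECT id, name FROM users WHERE name = ?", (username,))
    return cur.fetchall()
```

Both functions compile, both pass a happy-path test, and only one survives hostile input. That is the gap "it works" hides.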
When the Assistant Gets Root
The situation becomes more interesting, and more dangerous, when AI systems are given root.
Modern code agents don’t just suggest changes. They execute shell commands. They modify repositories. They interact with CI/CD systems. They access internal documentation. Sometimes they operate with scoped credentials to perform deployments.
From a security perspective, that’s not a helper. That’s an automated insider.
If these agents aren’t sandboxed, tightly permissioned, and fully auditable, they become high-leverage pivot points. A prompt injection is no longer just a funny demo; it’s potentially a path to credential exposure, unauthorized code changes, or infrastructure drift.
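"Tightly permissioned" can start as something very small: a policy layer that sits between the agent and the shell and refuses anything outside an explicit allowlist. The sketch below is hypothetical — the allowlist, the blocked patterns, and the `authorize` function are illustrative conventions, not the API of any real agent framework:

```python
import shlex

# Illustrative policy: tools the agent may run, and tools it must never
# reach, even indirectly through arguments.
ALLOWED_COMMANDS = {"git", "ls", "cat", "pytest"}
BLOCKED_PATTERNS = ("curl", "ssh", "aws", "kubectl")

def authorize(command_line: str) -> bool:
    """Return True only if the agent's proposed command passes policy."""
    try:
        tokens = shlex.split(command_line)
    except ValueError:
        return False  # malformed input is rejected, not guessed at
    if not tokens:
        return False
    if tokens[0] not in ALLOWED_COMMANDS:
        return False
    # Crude defense in depth: refuse arguments that smuggle a blocked tool
    # back in (e.g. "git config core.sshCommand ..."). Deliberately
    # over-broad; a real policy engine would be more precise.
    return not any(p in tok for tok in tokens[1:] for p in BLOCKED_PATTERNS)
```

A deny-by-default gate like this is not sufficient on its own, but it turns "the agent can run anything" into a reviewable, loggable decision point.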
We are onboarding (semi-)autonomous contributors into production environments.
The Return of Low-Hanging Risk
A few years ago, many mature organizations had closed the obvious gaps. Patch management was structured. Logging pipelines were established. Secure coding guidelines were enforced in CI. Improving security often meant incremental optimization, not foundational repair.
AI adoption has shifted that baseline.
In many environments, there is no clear governance for AI-generated code. No standardized review requirement. No defined policy for agent permissions. No logging of sensitive prompts. No security-specific constraints embedded in the workflow. No shared understanding of how AI should be used at all.
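The first of those gaps, a standardized review requirement, can be closed with a very small CI gate. Everything in this sketch is a hypothetical convention — the `AI-Assisted:` trailer in particular is made up for illustration (though `Reviewed-by:` is an established git trailer):

```python
import re

def is_ai_assisted(commit_message: str) -> bool:
    # Hypothetical convention: AI-assisted commits declare themselves
    # with an "AI-Assisted: yes" trailer in the commit message.
    return bool(re.search(r"^AI-Assisted:\s*yes", commit_message, re.M | re.I))

def has_human_review(commit_message: str) -> bool:
    # "Reviewed-by:" is a standard git trailer recording a human reviewer.
    return bool(re.search(r"^Reviewed-by:\s*\S+", commit_message, re.M))

def passes_gate(commit_message: str) -> bool:
    """Fail only when AI-assisted work lacks a recorded human reviewer."""
    return (not is_ai_assisted(commit_message)) or has_human_review(commit_message)
```

The point is not this particular mechanism but that the policy exists somewhere machine-enforceable, rather than as an unwritten expectation.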
The result isn’t chaos. It’s something subtler: quiet regression.
Code Is Cheap. Review Is Not
The economics of software creation have changed.
What once required a coordinated team and weeks of effort can now be generated in minutes. Functional code is abundant. It is no longer the product of many hours of hard work.
Engineering review still is.
Security properties, architectural coherence, performance implications, and long-term maintainability do not emerge automatically from generated output. They require deliberate review by engineers who understand threat models, code structure, and operational constraints.
Unreviewed AI-generated code should be treated as untrusted input. If it is merged without alignment to established standards and architecture, it becomes technical debt immediately. “Vibe code” is legacy code the moment it lands unless it is intentionally engineered into the system.
Speed increases output. It does not reduce responsibility.
History Doesn’t Repeat, But It Accelerates
The early internet era forced painful lessons about input validation, authentication, segmentation, and secure configuration. Over time, those lessons hardened into standards and tooling. The baseline improved.
AI-assisted development is now reshaping that baseline again.
If we treat it purely as a productivity tool, we will replay old mistakes with more powerful automation. If we treat it as an architectural shift, we have an opportunity to define secure defaults before insecure ones calcify.
We’ve seen this cycle before.
The difference now is not the nature of the risk. It’s the speed.
And when speed increases, so does blast radius.