We’ve condensed all the key takeaways into a handy audio summary.
The year 2025 marks a turning point in cybersecurity. It’s the year the floodgates opened in the world of cyber. For years, the number of publicly disclosed vulnerabilities, tracked as Common Vulnerabilities and Exposures (CVEs), has been climbing steadily. But in 2025, that steady climb accelerated into a dramatic surge that is pushing traditional security practices to their breaking point.
This isn’t just about more bugs being found. It’s a fundamental crisis that has made old ways of thinking obsolete. The sheer volume of CVEs has shattered the idea that we can analyze and patch every single one. Instead, the focus has been forced to shift from a game of numbers to a battle of context and risk. The data is clear: the future of cybersecurity belongs to those who can effectively prioritize and manage threats, not those who try to catch every drop in an ocean.
To understand where we are, it’s important to look at the numbers. The surge didn’t happen overnight; it’s the culmination of a multi-year trend. These numbers are according to the NVD CVEs count yearly.
The first nine months of 2025 have already seen 35196 CVEs published, and it is projected that the total for the year will be between ~45,000 and 50,000 new vulnerabilities. So while considering how 2025 currently looks, the year-end projection suggests it will overtake 2024 with around a 12.6% – 25.1% rise.
These numbers aren’t just statistics; they represent a fundamental shift in the security landscape. The volume alone has created a massive data backlog, forcing every organization to completely rethink how it manages vulnerabilities.
The reasons behind this surge are a complex mix of technical and systemic changes.
The biggest driver of the surge is the explosive growth and complexity of open-source software. A recent report found that a staggering 97% of all commercial applications contain open-source components. The problem, however, lies in what’s known as “transitive dependencies”, hidden components that other components rely on. Up to ~64% of open-source components are these hidden dependencies, meaning a single flaw in one library can affect hundreds or thousands of applications downstream.
Artificial intelligence (AI) is a major accelerant for both attackers and defenders. Attackers, too, are turning AI into a powerful ally. Instead of relying only on manual analysis, threat actors now use AI-driven tools to scan massive codebases, hunt for weak points, and even generate an exploit proof-of-concept in record time. What once required weeks of effort can now be accomplished in hours, giving them a faster path from vulnerability discovery to weaponization.
On the other side, security researchers are also using AI to their advantage. Large language models (LLMs) can analyze massive amounts of code, identify patterns, and generate test cases, speeding up vulnerability discovery and making human researchers more efficient. But this creates a new challenge: AI systems themselves are a new attack surface, vulnerable to their own unique threats like prompt injection and data poisoning. The cybersecurity world is now locked in an “AI cyber arms race” where both sides are advancing at an accelerated pace.
The surge in CVEs also reflects an expansion of the threat landscape beyond traditional IT. The proliferation of connected IoT devices has created a vast new attack surface, with security vendors reporting significant year-on-year increases in IoT attacks in 2025. Many of these devices suffer from “insecurity by design,” with common flaws like weak default passwords and unencrypted data.
The most significant shift, however, is the focus on Operational Technology (OT), which controls critical infrastructure like power grids and manufacturing plants. In 2024, more than half of cyber incidents reported to the U.S. Securities and Exchange Commission (SEC) involved attacks on OT, making it a new, high-stakes battleground. A vulnerability in these systems isn’t just a data risk; it’s a threat to public safety and business resilience.
The confluence of these trends means that the traditional approach to vulnerability management is no longer viable.
The old, periodic approach of quarterly or monthly vulnerability scans is now obsolete. The time-to-exploit for a vulnerability has shrunk dramatically, with over ~28 – 32% of known exploited vulnerabilities (KEVs) having exploitation evidence disclosed within one day of their CVE publication.
To keep up, organizations must pivot to a next-generation model that is:
Ultimately, this isn’t just a technical problem; it’s a financial one. The average cost of a data breach for U.S. companies reached an all-time high of $10.22 million in 2025. A significant 20% of these breaches started with the exploitation of a known vulnerability.
Basic vulnerability scans can cost $1,000 to $1500, while more comprehensive assessments for large environments are higher, but even at the high end, this remains a fraction of the cost of a single breach. The CVE deluge of 2025 makes it clearer than ever that a failure to invest in a modern vulnerability management program is a far costlier decision than it has ever been.
The great CVE deluge is the new normal. The raw count of vulnerabilities is no longer a meaningful security metric. As the volume continues to rise and data sources diverge, the focus will shift entirely to contextual risk. The ongoing “AI cyber arms race” and a growing number of government regulations will only intensify these pressures.
Navigating this landscape requires more than just a list of patches; it requires a strategic, risk-based approach that prioritizes what truly matters. Only those organizations that adapt to this new reality will be able to protect themselves in an increasingly complex and hostile world. In today’s environment, proactive security isn’t a luxury; it’s the price of survival.