How do different organizations manage vulnerabilities in their cloud environments?

Jul 31, 2025

Ben Hirschberg
CTO & Co-founder

Everyone talks about it, but no one knows how others actually do it. Until now. Read on…

What is a vulnerability, and why do we need to manage them?

In security conversations, “vulnerability” has become a word we throw around with assumed urgency, as if its mere presence demands immediate action. But what is a vulnerability, really? In essence, it’s a flaw or weakness in a system that could be exploited under specific conditions. And that word “could” is where things get problematic. Not every vulnerability is a ticking time bomb; many are theoretical risks that never translate into real-world breaches. Yet we chase them relentlessly. Why?

“The road to hell is paved with good intentions” fits here, and both the good intentions and the hell are literal. Anyone who works with vulnerabilities on a daily basis can attest to this, at least to the latter. So, why do we chase them? The reasons are complex. Some are external, such as compliance frameworks like SOC 2, ISO 27001, and PCI, which require us to identify and mitigate vulnerabilities as part of maintaining a secure posture. Others are internal: security teams want to reduce risk exposure, ensure uptime, and prevent the nightmare of a real incident triggered by a known vulnerability. No one wants to be the one who failed to address a vulnerability that let attackers into the system.

But here’s the reality few say out loud: vulnerabilities are not something you can completely eliminate. You manage them. Like firefighting or (god forbid) a chronic illness, it’s not about total eradication; it’s about monitoring, prioritizing, and responding wisely. The very language we use, “vulnerability management”, acknowledges that we are in an ongoing battle, not a clean-up job. That doesn’t mean we’re failing; it means we’re being realistic. Why realistic? Because there are so many vulnerabilities that organizations cannot eradicate them all; the best they can do is prioritize handling the riskiest ones.

Where to Scan: Choosing the Right Stage for Identifying Vulnerabilities

Once we accept that vulnerabilities must be managed, not just “fixed”, the next question becomes: Where in the software lifecycle should we look for them? This seemingly simple decision has massive implications for both security and engineering velocity.

Many organizations scan software packages, container images, and VM images at various stages: during development, in CI/CD pipelines, in staging environments, and in production. Each stage offers advantages, but also limitations.

Scanning in development (the “shift-left” approach), during coding or when dependencies are added, is fast and developer-friendly. Tools like IDE plugins, local scanners, or scanners that run in the SCM can catch issues early, ideally before code is even committed, and certainly before it hits shared branches. But here, the context is limited. You might flag vulnerabilities in packages that are never actually used in production, which wastes resources. We will come back to this point a bit later.

Talking with many players in the engineering and security field, this approach shines for in-house development. When you have a Python application and the developers are notified that one of its dependency packages has a critical vulnerability, it is much easier to fix than discovering the problem at a later stage. One of the most expensive things in engineering is the context switching of developers and management, and this approach notifies developers about vulnerabilities while they are still “in context”. However, two problems limit its effectiveness. First, not every vulnerability comes from in-house-built applications (industry estimates put around 70-80% of containers running in production environments as pure open source), so much of that software never passes through the organization’s development pipeline. Second, a Python package that has no known vulnerabilities today (and so passes the developers and gets shipped) might be flagged for vulnerabilities a month from now. The security team cannot leave this case uncovered.
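A minimal sketch of the shift-left idea: checking declared dependencies against an advisory list before they reach a shared branch. The advisory data below is invented for illustration; real tools query live vulnerability databases such as OSV or the GitHub Advisory Database.

```python
# Sketch of a shift-left dependency check. The advisory entries below are
# hypothetical; real tools query live advisory databases.

# Hypothetical advisory DB: package name -> vulnerable versions + severity
ADVISORIES = {
    "requests": {"vulnerable": {"2.19.0", "2.19.1"}, "severity": "critical"},
    "urllib3": {"vulnerable": {"1.24.1"}, "severity": "medium"},
}

def check_dependencies(deps: dict[str, str]) -> list[tuple[str, str, str]]:
    """Return (package, version, severity) for each flagged dependency."""
    findings = []
    for name, version in deps.items():
        advisory = ADVISORIES.get(name)
        if advisory and version in advisory["vulnerable"]:
            findings.append((name, version, advisory["severity"]))
    return findings

# Example: a dependency set a developer is about to commit
deps = {"requests": "2.19.1", "flask": "2.3.0"}
for name, version, severity in check_dependencies(deps):
    print(f"{severity.upper()}: {name}=={version} has a known vulnerability")
```

The point of running this in the IDE or SCM is the feedback loop: the developer who just added `requests==2.19.1` sees the finding while still in context.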

Scanning in software repositories has been around since containerization became a leading way to run workloads in the cloud, and numerous organizations scan container images in their container registries. In this approach, the development and DevOps teams push their software deliveries to the container registry (which they do anyway), and the security team gets access to these repositories and scans the images for vulnerabilities there. The advantage is that container images are monitored beyond the development phase, and in-house and external container images are monitored equally, which is a great benefit. The strategy has persisted because it is easy to implement and gives a high degree of coverage (though not full coverage: who enforces that engineering only deploys images from monitored repositories?).

The biggest downside of this approach is the total lack of context around the findings. Say a critical vulnerability is reported in one of the images in one of the repositories. What now? Does this vulnerability really pose a risk? Where is this image deployed? Is it deployed at all? Is the vulnerable software inside the image actually executed by the container? Does this container receive outside network traffic? None of this information is available in the registry. These questions are hard to answer because we are finding the vulnerability in a transitional phase, between development and production, where it lacks both the context of actual use and the intimate developer knowledge of the software. In most industry applications we have seen, this approach is a temporary phase: organizations might start with it for the ease of implementation, but move on because of the problems it brings later.
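The registry context gap can be sketched as a missing join: a registry scan says an image is vulnerable, but answering “is it deployed, and where?” requires a separate deployment inventory that the registry does not have. All image names, CVE IDs, and cluster names below are hypothetical.

```python
# Sketch of the registry-scanning context gap: findings exist per image,
# but deployment context lives elsewhere. All names here are hypothetical.

def triage(registry_findings, deployments):
    """Tag each vulnerable image as deployed (with location) or dormant."""
    results = {}
    for image in registry_findings:
        dep = deployments.get(image)
        results[image] = f"running in {dep['cluster']}" if dep else "not deployed"
    return results

registry_findings = {
    "registry.example.com/payments:1.4": ["CVE-0000-3333 (critical)"],
    "registry.example.com/old-batch:0.9": ["CVE-0000-4444 (critical)"],
}
# The deployment inventory the registry itself does not have
deployments = {
    "registry.example.com/payments:1.4": {"cluster": "prod"},
    # old-batch:0.9 sits in the registry but is deployed nowhere
}

for image, status in triage(registry_findings, deployments).items():
    print(f"{image}: {status}")
```

Without the second dictionary, both critical findings look identical; with it, one is a live production risk and the other is likely noise.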

CI/CD scanning is a common approach. It allows teams to enforce quality gates based on existing policies: “Don’t ship code with critical vulnerabilities”. It seamlessly integrates into DevSecOps workflows, providing security teams with visibility without slowing developers down. But here’s the catch: an image that’s clean today may be vulnerable tomorrow. New CVEs are published constantly. That base image you built last week? It could now contain a critical flaw that wasn’t known at the time. 

Another limitation of this approach is that it struggles to incorporate context about a vulnerability. In the CI/CD, we can scan the software and save the results, but the most practical use is as an automatic quality gate, as mentioned earlier. In such an automatic gate, it is challenging to incorporate the judgment of the security or engineering team. For example, suppose DevOps knows that a critical vulnerability exists in one of the tools in the image, but that tool is never invoked in the production use-cases of the image being built or deployed, and therefore poses no threat. The gate still blocks or delays delivery, so people invest resources in non-existent problems. In real-world applications, this approach has merit by raising security hygiene to some extent; however, it doesn’t provide full coverage, because the time of check differs from the time of use, leading to missed vulnerabilities.
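One way teams soften the blunt-gate problem is a documented exception list (in the spirit of VEX statements): the gate blocks on critical findings unless the team has recorded that a specific CVE is not exploitable in their use-case. This is a sketch under those assumptions; the CVE IDs and package names are invented.

```python
# Sketch of a CI/CD quality gate with a team-maintained exception list.
# All CVE IDs and package names are hypothetical.

def gate(findings, exceptions, block_on=frozenset({"critical"})):
    """Return (passed, blocking): blocking lists non-excepted findings."""
    blocking = [
        f for f in findings
        if f["severity"] in block_on and f["cve"] not in exceptions
    ]
    return (len(blocking) == 0, blocking)

findings = [
    {"cve": "CVE-0000-0001", "package": "imagemagick", "severity": "critical"},
    {"cve": "CVE-0000-0002", "package": "openssl", "severity": "medium"},
]
# "Not invoked in our production use-cases" -- documented by the team
exceptions = {"CVE-0000-0001"}

passed, blocking = gate(findings, exceptions)
print("gate passed" if passed else f"gate blocked by {blocking}")
```

The exception list is itself a liability if it goes stale, which is part of why the time-of-check vs. time-of-use gap remains.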

Runtime or production scanning (the “shift-right” approach) solves this “drift” problem. Here, virtual machines and container images are scanned for vulnerabilities while running on the actual infrastructure. This approach lets you continuously monitor what’s actually running, even long after the initial deployment. It provides the most accurate picture of risk and plenty of context, since you are monitoring vulnerabilities where they actually matter. The downside is that it’s too late to block the deployment from running, and you’re now dealing with live services. However, you have the clearest picture of what is running and where, making this the easiest of the options above for deciding whether a vulnerability really poses a risk. Modern CSPMs focus their efforts on bringing as much context as possible to help security teams reach a verdict.

Each scanning stage reveals a different slice of reality. None is perfect. That is why modern cloud security isn’t about picking just one; it’s about layering them, understanding what each tells you, and, most importantly, knowing which vulnerabilities matter in context. Here is a table to sum up the discussion:

| Scanning stage | Description | Strengths | Limitations | Best fit for |
|---|---|---|---|---|
| Development (“Shift Left”) | Scanning during coding, dependency addition, or in SCM before merging into main branches | Early feedback for developers; in-context remediation; reduces the cost of late-stage fixes | Lacks usage/runtime context; doesn’t cover open-source containers; doesn’t protect against future CVEs | In-house developed applications with active development teams |
| Container registry | Scanning images stored in internal or external repositories | Covers both internal and 3rd-party images; easy to implement; monitors post-development | Lacks deployment/runtime context; may scan unused images; can’t assess exposure or active risk | Organizations starting to formalize vulnerability management without heavy investment |
| CI/CD pipeline | Scanning as part of automated builds and deployments, often as a quality gate | Enforces policy gates while integrating into DevSecOps workflows; improves hygiene | Can’t include runtime context; time-of-check ≠ time-of-use; might block unnecessarily | Mature DevOps pipelines enforcing secure delivery policies |
| Runtime (“Shift Right”) | Scanning what’s actively running on the production infrastructure | Highest contextual fidelity; detects actual risk; enables informed decisions | Detection happens after deployment | Real-world risk assessment and prioritization in production |

Context is Everything: Why Usage Matters More Than Existence

A vulnerability in a piece of software sounds like a big deal, and sometimes it is. But context defines whether it’s a theoretical concern or a real-world threat. Without context, we’re treating every vulnerability the same way, regardless of whether it’s in a live authentication service or a dormant sidecar container in a staging environment.

Take the classic example: a library has a known vulnerability. Perhaps even a critical one. But is that library actually used at runtime? Is it part of a path exposed to the internet? Is it running in a production namespace or a sandboxed test pod? Without those answers, we’re just guessing at risk.

Contextual information transforms vulnerability data from static alerts into actionable intelligence. It answers questions like:

  • Where is this software running? Dev, staging, or prod?
  • Who can reach it? Is it public-facing or internally scoped?
  • What does it do? Is it a core authentication service or a low-priority batch job?
  • Is the vulnerable code used? Or is it just a transitive dependency?

This kind of nuance enables teams to transition from blindly patching everything (an impossible goal in most cases, IMHO) to intelligent triage. A medium-severity vulnerability in an internet-exposed login service may deserve more urgent attention than a critical CVE in a debug tool used only in dev. Yet many tools still prioritize based purely on CVSS scores, leaving security teams overwhelmed with noise and underwhelmed with value.

Context turns a pile of alerts into a meaningful prioritization strategy. 

Drowning in Vulnerabilities: Even Small Teams Aren’t Immune

There’s a common misconception that vulnerability overload is a “big company problem.” The truth? Even a small startup with a few microservices can become overwhelmed by security findings. Why? Because modern software isn’t small. It’s layered, interconnected, and built on thousands of dependencies you didn’t write and can barely pronounce.

A single container image might include hundreds of packages. Add a base image, a language runtime, a couple of utilities, and suddenly you’re scanning against tens of thousands of CVEs. Multiply that by every service, every environment, every update, and even modest setups generate thousands of vulnerabilities in minutes.

And not all of them matter. Many are:

  • In packages not used at runtime
  • In services not deployed to production
  • Patched but not reflected in the scanner’s last index update
  • Completely irrelevant to your threat model

Still, they show up red. And when everything is red, nothing stands out.

This leads to the worst of both worlds: overwhelmed security teams and frustrated developers. One side feels buried under a mountain of alerts; the other sees security as an obstacle, not an ally. And all the while, the truly important vulnerabilities, the ones with real-world impact, risk getting buried in the noise.

This is where context, prioritization, and process begin to matter more than raw detection. Because identifying vulnerabilities is no longer the hard part. Knowing which ones matter is.

From Volume to Value: How Organizations Prioritize Vulnerabilities

Once the flood of vulnerabilities begins — and it will — the real challenge isn’t finding them. It’s deciding what to do next. Prioritization is where vulnerability management becomes either effective… or paralyzing.

Most organizations start with vulnerability-based prioritization, ranking findings by severity scores such as CVSS. If it’s labeled “Critical,” it goes to the top of the list. Seems logical, until you realize that “Critical” doesn’t mean “critical to you”. It could be a high-risk bug in a feature your software never touches, or in a package that’s not even loaded at runtime.

That’s where exploitability-based prioritization comes in. This model considers two things. One, whether a vulnerability is actively exploited in the wild, has a public proof of concept, appears in a threat intelligence feed (such as CISA KEV), or is expected to have a public exploit soon (the EPSS metric). Two, whether the vulnerability could be exploited in the way the software is used in your environment, but we are getting ahead of ourselves. Exploitability-based prioritization is a significant step forward, narrowing the list to what attackers actually care about. However, it still doesn’t take your specific environment or business context into account.
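A minimal sketch of exploitability-based ranking: sort findings by KEV membership first, then EPSS score, then CVSS. The CVE IDs and scores below are invented for illustration; real data comes from the CISA KEV catalog and the FIRST EPSS feed.

```python
# Sketch of exploitability-based prioritization. KEV membership and EPSS
# probabilities below are hypothetical examples, not real feed data.

KEV = {"CVE-0000-1111"}                                 # exploited in the wild
EPSS = {"CVE-0000-1111": 0.92, "CVE-0000-2222": 0.03}   # P(exploit in 30 days)

def exploitability_rank(findings):
    """Sort findings: KEV membership first, then EPSS score, then CVSS."""
    def key(f):
        return (f["cve"] in KEV, EPSS.get(f["cve"], 0.0), f["cvss"])
    return sorted(findings, key=key, reverse=True)

findings = [
    {"cve": "CVE-0000-2222", "cvss": 9.8},  # critical, but unlikely to be exploited
    {"cve": "CVE-0000-1111", "cvss": 6.5},  # medium, but actively exploited
]
print([f["cve"] for f in exploitability_rank(findings)])
# The KEV entry outranks the higher-CVSS finding
```

Note how the medium-severity CVE jumps ahead of the critical one: exploitation evidence trumps the raw score.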

Last, but not least, enter context-aware prioritization, a model that aligns technical findings with business risk. Here, a low-severity vulnerability in a critical workload (say, the identity provider for your entire platform) might outrank several “critical” CVEs in development tools or isolated services.

This approach considers:

  • Workload sensitivity (Is this handling personal data or authentication? If an attacker exploits this workload, what could they get their hands on?)
  • Environment (Is it running in production or test?)
  • Application role (Is it part of infrastructure, a customer-facing app, or just logging?)
  • Exposure (Can it be reached from the internet?)
  • Reachability (Is the vulnerable code actually used by the application?)
  • Runtime state (Is it even running right now?)

Note: the last three points (exposure, reachability, runtime state) overlap somewhat with the question of whether a vulnerability can be exploited, and are thus connected to exploitability-based prioritization, while the other factors concern what can happen if an attacker penetrates the vulnerable workload.
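The factors above can be sketched as multipliers on a base severity score. The weights and workload descriptions here are illustrative assumptions, not a standard formula; real CSPMs use their own (usually more elaborate) models.

```python
# Sketch of context-aware prioritization: scale a finding's CVSS base
# score by the context factors that apply. Weights are illustrative
# assumptions, not a standard formula.

CONTEXT_WEIGHTS = {
    "production": 2.0,
    "internet_exposed": 2.0,
    "code_reachable": 1.5,
    "running": 1.5,
    "handles_sensitive_data": 1.5,
}

def contextual_risk(cvss, context):
    """Multiply the CVSS base score by each applicable context weight."""
    score = cvss
    for factor, weight in CONTEXT_WEIGHTS.items():
        if context.get(factor):
            score *= weight
    return score

# A medium CVE in an exposed, running production login service...
login = contextual_risk(6.5, {"production": True, "internet_exposed": True,
                              "code_reachable": True, "running": True,
                              "handles_sensitive_data": True})
# ...vs a critical CVE in a dormant, dev-only debug tool
debug = contextual_risk(9.8, {})
print(login > debug)  # context reorders the queue
```

The exact numbers matter less than the shape of the result: context can legitimately push a medium finding above a critical one.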

The result? Security teams get fewer, more meaningful tickets. Developers see less noise. And leadership can finally tie vulnerability risk to actual business impact.

Prioritization isn’t just about filtering noise. It’s how organizations turn an endless problem into a manageable one. And in a world where perfect security is impossible, smart prioritization is what defines mature cloud security operations.

Wrapping Up: Managing the Unmanageable

Cloud vulnerability management isn’t about reaching zero. It’s about navigating the chaos with clarity. Every organization scans. Every organization finds vulnerabilities. The difference lies in what they do next.

Tools will keep generating findings. New CVEs will keep popping up. And workloads will keep evolving. However, by focusing on context, usage, and business relevance, organizations can cut through the noise and take control of the problem, rather than being controlled by it.

The path to better cloud security isn’t paved with more scanning. It’s paved with smarter decisions, about where to scan, what to fix, and when to act. That’s how modern teams stop chasing red flags and start protecting what matters most.
