During a weekly scan review, you notice a CVE published only two days ago. The entry shows a CVSS v3.1 base score of 6.5, and no proof-of-concept code is yet public. A senior engineer asks whether this vulnerability should bump higher-severity items out of the next maintenance window. You recall a community-driven, data-science model that ingests exploit feeds, dark-web chatter, and historical breach data to calculate the probability, between 0 and 1, that the flaw will be exploited in the wild within the next 30 days. Which of the following statements best captures how that model works?
It assigns a probability that attackers will create or use a working exploit in the near term
It synchronizes update notifications with software creators' normal release cadences
It calculates how broadly the flaw could propagate across diverse systems and networks
It gauges vendor patch quality by monitoring public tests of common exploit code
The model in question is the Exploit Prediction Scoring System (EPSS). EPSS applies statistical and machine-learning techniques to vulnerability attributes and real-world threat intelligence to produce a probability (0-1) that a CVE will be exploited in the next 30 days. This probabilistic output distinguishes EPSS from metrics that focus on severity (CVSS), prevalence, or vendor patch timelines. Therefore, the option describing a probability that attackers will develop or use an exploit is correct. The other statements emphasize propagation potential, patch-quality evaluation, or vendor release schedules, none of which reflect EPSS's predictive objective.
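In practice, teams often combine the two metrics: EPSS drives the patch order, while CVSS breaks ties. The sketch below illustrates that triage logic; the CVE identifiers and scores are invented for illustration, not real EPSS data.

```python
# Illustrative sketch: rank findings by EPSS exploitation probability first,
# falling back to CVSS base score as a tie-breaker. All values are made up.

def prioritize(vulns):
    """Sort vulnerabilities by EPSS probability (likelihood of exploitation
    within 30 days), descending, then by CVSS base score."""
    return sorted(vulns, key=lambda v: (v["epss"], v["cvss"]), reverse=True)

findings = [
    {"cve": "CVE-EXAMPLE-0001", "cvss": 9.8, "epss": 0.02},  # severe, but low exploitation probability
    {"cve": "CVE-EXAMPLE-0002", "cvss": 6.5, "epss": 0.81},  # moderate severity, likely to be exploited
    {"cve": "CVE-EXAMPLE-0003", "cvss": 7.4, "epss": 0.10},
]

for v in prioritize(findings):
    print(f'{v["cve"]}: EPSS={v["epss"]:.2f}, CVSS={v["cvss"]}')
```

Note how the 6.5-scored CVE jumps to the top of the queue once its exploitation probability is considered. In a real pipeline, the `epss` values would come from FIRST's published EPSS scores rather than being hard-coded.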