Subject: RE: [was] Updated WAS Classification Scheme
I know we only just touched on discussing various models at the face-to-face (i.e. we all know this needs a lot more thought), but here are some of my thoughts.
I think we need to understand and plan this as our contribution to the bigger picture of risk, i.e. risk is what people ultimately want to measure: what's the risk to my business? WAS is about vulns, which are only one part of that risk equation, so I think we need to find a way to rank the severity of vulnerabilities such that we can feed risk systems with meaningful, useful data, and companies can use other data to calculate the risk in their own way. But we shouldn't venture that way ourselves. It's very complex and well outside the realm of WAS, IMHO.
If we focus on NIST 800-30 as a high-level way of determining risk, it may help to bring some clarity. It categorizes vulnerabilities into operational, technical, and management. The things we are dealing with are obviously technical vulnerabilities (although the root-cause element may also help indicate management or operational issues, I guess). But the point is, IMHO, we need to be careful to define the scope as the vuln and part of the threat only. "Part of the threat" is explained in a second.
I think someone building a sig management system around WAS-only data would want to have several views into the system: # of vulns, # of vulns of a particular severity, # of vulns by type (VulnType), # of vulns within a date range, etc.
If we think of risk = vuln x threat x business impact as a basic model, we can understand how this vuln data would be used.
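To make the division of responsibility concrete, here is a minimal sketch of that basic model in Python (the function and the numeric scale are purely illustrative assumptions of mine, not anything WAS defines; WAS would supply only the vuln piece, while the consumer's risk system supplies threat and business impact):

```python
def risk(vuln_severity: float, threat: float, business_impact: float) -> float:
    """Basic multiplicative risk model: risk = vuln x threat x business impact.

    WAS can feed in vuln_severity (and hint at threat); the other
    factors come from the consumer's own environment and asset data.
    """
    return vuln_severity * threat * business_impact

# Same vulnerability, same threat level, different assets:
# the business decides the impact weighting, not WAS.
print(risk(1.5, 1.0, 1.5))  # high-impact asset  -> 2.25
print(risk(1.5, 1.0, 0.5))  # low-impact asset   -> 0.75
```

The point of the sketch is only that the vuln factor is separable: we can hand over a severity number without ever knowing the other two terms.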
The real value of this data, at a high level, is when someone is able to apply the vuln data to an asset and know what the impact to their business would be if that asset were exploited (i.e. the threat matured).
We have no idea about the impact to the business, so we can't feed that in in any way. We do have an idea about the impact to a system (i.e. root compromise, etc., but not to the business).
We can feed in part of the vuln (i.e. the technical part, and maybe we should influence operational/management through root cause under certain circumstances).
We do not know the real threat. For example, the threat of an overflow vuln being used against a power utility company after the NE blackouts is very much higher than before, but a WAS endpoint system wouldn't know that threat model, since we don't know (and shouldn't try to define) the end environment. That said, we should feed into the threat portion of a risk model things like whether exploit code exists, whether it can be automated, etc. I think that's useful data we should capture, and it allows people to build better models.
So I think this model should produce a vulnerability severity and a threat indicator, which on their own are useful but are really intended to be fed into risk management systems, which is where that data is truly useful.
In your overview we started defining data that a researcher or vuln analyst may not know about:
· Quantity of data (%)
· …TODO: Add more consequence factors
I think we can define the vuln severity as a form of potential impact to the system (i.e. data modification, partial system compromise, total system compromise, exploitable remotely or locally, etc.) and place a weighting on those factors.
An example (and this is just for illustrative purposes): a local buffer overflow, where you need to be on the local system to exploit it.
Weightings: 0.5 negative, 1.0 neutral, 1.5 positive.
The effect is total system compromise (therefore 1.5), but it is only locally exploitable (therefore 0.5).
I.e. vuln severity is a function of a Technical Impact Factor and a Threat Prevalence Factor.
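The buffer overflow example above can be sketched in a few lines of Python. To be clear, the factor names and the multiplicative combination are my own assumptions for illustration only; this is one possible shape of the model, not a proposal for the final scheme:

```python
# Illustrative weighting scale from the example above:
# each factor scores 0.5 (negative), 1.0 (neutral), or 1.5 (positive).
NEGATIVE, NEUTRAL, POSITIVE = 0.5, 1.0, 1.5

def vuln_severity(technical_impact: float, threat_prevalence: float) -> float:
    """Hypothetical: severity as the product of the two weighted factors."""
    return technical_impact * threat_prevalence

# Local buffer overflow: total system compromise (positive, 1.5)
# but only locally exploitable (negative, 0.5).
severity = vuln_severity(technical_impact=POSITIVE, threat_prevalence=NEGATIVE)
print(severity)  # 0.75
```

A remotely exploitable vuln with the same impact would score 1.5 x 1.5 = 2.25, so the weighting does separate the two cases, which is all the example is meant to show.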
That said, there are loads of ways to do this.
I am open to any suggestions but would like to keep it simple!
From: David Raphael [mailto:email@example.com]
Sent: Thursday, April 01, 2004 6:09 PM
Subject: [was] Updated WAS Classification Scheme
I’ve updated this document with a rough draft of the Vulnerability Ranking model. Please review and pass along any comments you have. I will continue to update it this weekend with more detail.