Subject: RE: [was] Updated WAS Classification Scheme
I am going to proceed with these assumptions if no one else has any feedback for the first draft of this document. /d

-----Original Message-----
From: David Raphael
Sent: Monday, April 19, 2004 10:20 AM
To: Mark Curphey; Jeff Williams; email@example.com
Subject: RE: [was] Updated WAS Classification Scheme

Hi everyone,

I would like to get a little more feedback on which factors should play into the vulnerability severity. Additionally, I was thinking that a 10-point scale for each axis would provide sufficient granularity in determining the severity.

There is another issue as well: I am not sure where the curves should fall on the graph. In the draft I sent out, I placed the endpoints of the curves at equal points on the graph axes. This is fine if the Prevalence Factor balances correctly. But I suppose we are just looking for an accurate final output, so as long as we weight the prevalence factor components accurately, we will get an accurate result.

Here is another situation that I thought of: what if you have two low-severity vulns? One is a low-severity REMOTE VNC exploit, and the other is a low-severity LOCAL BUFFER OVERFLOW that provides root access. Both are low severity, but combining the two exploits provides REMOTE SYSTEM COMPROMISE. I am just curious whether there is any way to account for this.

Here is what I am looking at so far:

Technical Impact Factor components:
- Administrator access (total system compromise): +5
- User access (partial system compromise): +2
- DoS: +2
- Data read access: +3

Threat Prevalence Factor components:
- Tools readily available: +3
- Difficulty of exploit construction: +1 to +3
  Note: I think this is a sliding scale of difficulty. +1 would be very difficult, +3 would be very easy.
- Local or remote: (x0.5) for local, (x1) for remote

Should we correlate the components of the factors with the VulnTypes, e.g. buffer overflows, DoS, etc.? Feedback appreciated.
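[Editor's note] The component scheme above can be sketched in a few lines of Python. How the two factors combine (normalizing each onto Dave's suggested 10-point axis, then multiplying and rescaling) is an assumption for illustration only -- the thread has not settled on a final formula, and all function and key names here are hypothetical.

```python
# A minimal sketch of the additive component model proposed above.
# The combination rule is an assumption, not part of the draft.

TECH_IMPACT = {                  # Technical Impact Factor components
    "admin_access": 5,           # total system compromise
    "user_access": 2,            # partial system compromise
    "dos": 2,
    "data_read": 3,
}

def prevalence(tools_available: bool, ease: int, remote: bool) -> float:
    """Threat Prevalence Factor: tools readily available (+3), ease of
    exploit construction (+1 very hard .. +3 very easy), and a locality
    multiplier (x0.5 local, x1 remote)."""
    assert 1 <= ease <= 3
    return ((3 if tools_available else 0) + ease) * (1.0 if remote else 0.5)

def severity(impacts: list, tools_available: bool, ease: int, remote: bool) -> float:
    """Combine the two factors onto Dave's suggested 10-point axes."""
    impact = sum(TECH_IMPACT[i] for i in impacts)
    impact_axis = 10.0 * impact / sum(TECH_IMPACT.values())             # max raw sum is 12
    prev_axis = 10.0 * prevalence(tools_available, ease, remote) / 6.0  # max raw score is 6
    return round(impact_axis * prev_axis / 10.0, 1)                     # back onto 0..10

# Dave's second scenario: a local buffer overflow giving root, with no
# public tools and a hard exploit, still scores low despite admin access.
print(severity(["admin_access"], False, 1, False))   # 0.3
```

Note that under a purely multiplicative combination like this, the "two low-severity vulns chain into a remote compromise" case Dave raises still scores low for each vuln individually -- chaining would need separate treatment.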
-Dave

-----Original Message-----
From: Mark Curphey [mailto:firstname.lastname@example.org]
Sent: Friday, April 02, 2004 1:39 PM
To: David Raphael; Jeff Williams; email@example.com
Subject: RE: [was] Updated WAS Classification Scheme

100% agree. Risk is where this data becomes valuable. Focusing on System Impact for WAS 1.0 seems like the right approach.

-----Original Message-----
From: David Raphael [mailto:firstname.lastname@example.org]
Sent: Friday, April 02, 2004 10:59 AM
To: Mark Curphey; Jeff Williams; email@example.com
Subject: RE: [was] Updated WAS Classification Scheme

I think that the Risk model would be a valuable addition, and we should definitely put it in the 2.0 roadmap. See my other email suggesting this as a separate component of the profile element. /d

-----Original Message-----
From: Mark Curphey [mailto:firstname.lastname@example.org]
Sent: Friday, April 02, 2004 9:49 AM
To: Jeff Williams; David Raphael; email@example.com
Subject: RE: [was] Updated WAS Classification Scheme

Whilst I totally agree with what you are saying (and we all know the ultimate value of this is in risk management, not vuln management), that would make WAS a risk management format, not a vuln management format. That's a pretty massive scope difference: we would have to define all of the elements needed for a risk management format. Also, a lot of people are starting to show interest in this concept beyond the realm of just app-sec vulns, i.e. an enterprise vuln management language. Again, I think we can all see that's where this will end up, but we need to keep track of scope here so we can make sure we get WAS 1.0 out in the timeframe we planned (end of April for the VulnTypes and Vuln Ranking Model, and August for the final spec). Any merit in tabling this for WAS 2.0?
-----Original Message-----
From: Jeff Williams [mailto:firstname.lastname@example.org]
Sent: Friday, April 02, 2004 10:43 AM
To: Mark Curphey; David Raphael; email@example.com
Subject: Re: [was] Updated WAS Classification Scheme

Mark,

Great summary of the difficulty here. I think our scheme should allow us to express as much information as possible, but it needs to be clear about which parts of risk are not covered. I'm thinking that in many cases we WILL know quite a lot about the business impact of a vulnerability -- especially if you're an employee or consultant working closely with the company and want to use WAS to describe and track the issue. Even if some researcher is testing myPhpCreditCardStore and finds a SQL injection, he'll want to be able to say that it will disclose all the credit cards in the DB, that Visa and the FTC will levy fines, and that the company's reputation will be shot. So I'm leaning towards a system where we CAN specify as much as we know about impact, but aren't required to. The real trick is prioritizing items where you don't know enough about the impact -- but that has to be up to the business.

--Jeff

----- Original Message -----
From: "Mark Curphey" <firstname.lastname@example.org>
To: "David Raphael" <email@example.com>; <firstname.lastname@example.org>
Sent: Friday, April 02, 2004 10:05 AM
Subject: RE: [was] Updated WAS Classification Scheme

Dave,

I know we only just touched on discussing various models at the face-to-face (i.e. we all know this needs a lot more thought), but here are some of my thoughts. I think we need to understand and plan this as our contribution to the bigger picture of risk, i.e. risk is what people ultimately want to measure: what's the risk to my business?
WAS is about vulns, which are only one part of that risk equation, and therefore I think we need to find a way to rank the severity of the vulnerabilities such that we can feed risk systems with meaningful, useful data, and companies can use other data to calculate the risk in their own way. But we shouldn't venture that way ourselves; it's very complex and well outside the realm of WAS, IMHO.

If we focus on NIST 800-30 as a high-level way of determining risk, it may help to bring some clarity. It categorizes vulnerabilities into operational, technical, and management. The things we are dealing with are obviously technical vulnerabilities (although the root cause element may also help indicate management or operational, I guess). But the point is, IMHO, we need to be careful to limit the scope to the vuln and part of the threat only. (Part of the threat explained in a second.)

I think someone building a sig management system around WAS-only data would want to have several views into the system: # of vulns, # of vulns of a particular severity, # of vulns by type (VulnType), # of vulns within dates, etc.

If we think of risk = vuln x threat x business impact as a basic model, we can understand how this vuln data would be used. The real value of this data at a high level, to me, is when someone is able to apply the vuln data to an asset and know what the impact to their business would be if that asset were exploited (i.e. the threat matured).

- We have no idea about the impact to the business, so we can't feed that in any way.
- We have an idea about the impact to a system (i.e. root compromise etc., but not to the business).
- We can feed part of the vuln (i.e. technical, and maybe it should influence operational/management through root cause under certain circumstances).
- We do not know the real threat (i.e. the threat of an overflow vuln being used against a power utility company after the NE blackouts is very much higher than before, but a WAS endpoint system wouldn't know the threat model, as we don't know -- and shouldn't try to define -- the end environment).

That said, we should feed into the threat portion of a risk model things like whether exploit code exists, whether it can be automated, etc. I think that's useful data we should capture, and it allows people to build better models. So I think this model should produce a vulnerability severity and a threat indicator, which on their own are useful but are really intended to be fed into risk management systems, which is where that stuff is truly useful.

In your overview we started defining data that a researcher or vuln analyst may not know about:

* Quantity of data (%)
* ...TODO: Add more consequence factors

I think we can define the vuln severity as a form of potential impact to the system (i.e. data modification, partial system compromise, total system compromise, exploited remotely or locally, etc.) and place a weighting on those factors. An example (and this is just for illustrative purposes): a local buffer overflow, where you need to be on the local system to exploit it. The weights are 0.5 negative, 1.0 neutral, 1.5 positive. The effect is total system compromise (therefore 1.5), but it is only locally exploitable (0.5). I.e. Vuln Severity is a function of a Technical Impact Factor and a Threat Prevalence Factor.

That said, there are loads of ways to do this. I am open to any suggestions but would like to keep it simple!

Mark Curphey
Consulting Director
Foundstone, Inc. Strategic Security
949.297.5600 x2070 Tel
781.738.0857 Cell
949.297.5575 Fax
http://www.foundstone.com

This email may contain confidential and privileged information for the sole use of the intended recipient. Any review or distribution by others is strictly prohibited. If you are not the intended recipient, please contact the sender and delete all copies of this message. Thank you.
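[Editor's note] Mark's illustrative weighting can be sketched in a few lines of Python. Only the 0.5 (negative) / 1.0 (neutral) / 1.5 (positive) weights come from his example; the function name, factor names, and the base score of 1.0 are hypothetical.

```python
# Sketch of Mark's illustrative weighting: each factor is rated
# negative (0.5), neutral (1.0), or positive (1.5), and the vuln
# severity is the product of the weights.

WEIGHT = {"negative": 0.5, "neutral": 1.0, "positive": 1.5}

def vuln_severity(base: float, factors: dict) -> float:
    """Multiply a base score by the weight of each impact/threat factor."""
    score = base
    for rating in factors.values():
        score *= WEIGHT[rating]
    return score

# Mark's example: total system compromise (positive, 1.5)
# but only locally exploitable (negative, 0.5):
print(vuln_severity(1.0, {"system_impact": "positive",
                          "remote_exploitable": "negative"}))  # 0.75
```

One property of this multiplicative form is that a single strongly negative factor (like local-only exploitability) halves the score no matter how severe the technical impact is, which matches the intuition in the example.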
_____

From: David Raphael [mailto:email@example.com]
Sent: Thursday, April 01, 2004 6:09 PM
To: firstname.lastname@example.org
Subject: [was] Updated WAS Classification Scheme

Hello Everyone,

I've updated this document with a rough draft of the Vulnerability Ranking model. Please review and pass along any comments you have. I will continue to update it this weekend with more detail.

Regards,
David Raphael

To unsubscribe from this mailing list (and be removed from the roster of the OASIS TC), go to http://www.oasis-open.org/apps/org/workgroup/was/members/leave_workgroup.php.