OASIS Mailing List Archives

Subject: RE: [was] Updated WAS Classification Scheme

100% agree. Risk is where this data becomes valuable. Focusing on System
Impact for WAS 1.0 seems like the right approach. 

-----Original Message-----
From: David Raphael [mailto:draphael@citadel.com] 
Sent: Friday, April 02, 2004 10:59 AM
To: Mark Curphey; Jeff Williams; was@lists.oasis-open.org
Subject: RE: [was] Updated WAS Classification Scheme

I think that the Risk model would be a valuable addition.  I think we
should definitely put it in the 2.0 roadmap.  See my other email
suggesting this as a separate component of the profile element.


-----Original Message-----
From: Mark Curphey [mailto:mark.curphey@foundstone.com]
Sent: Friday, April 02, 2004 9:49 AM
To: Jeff Williams; David Raphael; was@lists.oasis-open.org
Subject: RE: [was] Updated WAS Classification Scheme

Whilst I totally agree with what you are saying (and we all know the
ultimate value of this is in risk management, not vuln management), that
would make WAS a risk management format, not a vuln management format.
That's a pretty massive scope difference: defining all of the elements
needed for a risk management format.

Also, a lot of people are starting to show interest in this concept
beyond the realm of just App Sec vulns, i.e. an enterprise vuln
management language. Again, I think we can all see that's where this
will end up, but we need to keep a close eye on scope here so we can
make sure we get WAS 1.0 out in the timeframe we planned (end of April
for VulnTypes and the Vuln Ranking Model, and August for the final
spec).

Any merit in tabling this for WAS 2.0?

-----Original Message-----
From: Jeff Williams [mailto:jeff.williams@aspectsecurity.com]
Sent: Friday, April 02, 2004 10:43 AM
To: Mark Curphey; David Raphael; was@lists.oasis-open.org
Subject: Re: [was] Updated WAS Classification Scheme


Great summary of the difficulty here. I think our scheme should allow
us to express as much information as possible, but it needs to be clear
about what parts of risk are not covered.

I'm thinking that in many cases, we WILL know quite a lot about the
business impact of a vulnerability -- especially if you're an employee
or consultant working with the company closely and want to use WAS to
describe and track the issue.

Even if some researcher is testing myPhpCreditCardStore and finds a SQL
injection, he'll want to be able to say that this will disclose all the
CCs in the DB, that Visa and the FTC will levy fines, and that the
company's reputation will be shot.

So I'm leaning towards a system where we CAN specify as much as we know
about impact, but it isn't required. The real trick is prioritizing
items where you don't know enough about the impact. But that has to be
up to the business.


----- Original Message -----
From: "Mark Curphey" <mark.curphey@foundstone.com>
To: "David Raphael" <draphael@citadel.com>; <was@lists.oasis-open.org>
Sent: Friday, April 02, 2004 10:05 AM
Subject: RE: [was] Updated WAS Classification Scheme


I know we just touched on discussing various models at the face-to-face
(i.e. we all know this needs a lot more thought), but here are some of
my thoughts.

I think we need to understand and plan this as our contribution to the
bigger picture of risk, i.e. risk is what people ultimately want to
measure. What's the risk to my business? WAS is about vulns, which is
only a part of that risk equation, and therefore I think we need to
find a way to rank the severity of the vulnerabilities in such a way
that we can feed risk systems with meaningful, useful data, and
companies can use other data to calculate the risk in their own way.
But we shouldn't venture that way ourselves. It's very complex and well
outside the realm of a vuln format.

If we focus on NIST 800-30 as a high-level way of determining risk, it
may help to bring some clarity. It categorizes vulnerabilities into
operational, technical and management. The things we are dealing with
are obviously technical vulnerabilities (although the root cause
element may also help indicate management or operational, I guess). But
the point is, IMHO, we need to be careful to limit the scope to the
vuln and part of the threat only. Part of the threat is explained in a
second.

I think someone building a sig management system around WAS-only data
would want to have several views into the system: # of vulns, # of
vulns of a particular severity, # of vulns by type (VulnType), # of
vulns within dates, etc.
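Those views can be sketched in a few lines (assuming a simple in-memory list of vuln records; the field names here are illustrative, not taken from the WAS schema):

```python
from collections import Counter
from datetime import date

# Illustrative WAS vuln records -- field names are invented for this sketch.
vulns = [
    {"type": "sql-injection", "severity": "high", "found": date(2004, 3, 12)},
    {"type": "buffer-overflow", "severity": "high", "found": date(2004, 3, 28)},
    {"type": "xss", "severity": "medium", "found": date(2004, 4, 1)},
]

total = len(vulns)                                   # "# of vulns"
by_severity = Counter(v["severity"] for v in vulns)  # "# of vulns of a particular severity"
by_type = Counter(v["type"] for v in vulns)          # "# of vulns by type (VulnType)"
in_march = sum(1 for v in vulns                      # "# of vulns within dates"
               if date(2004, 3, 1) <= v["found"] <= date(2004, 3, 31))

print(total, by_severity["high"], by_type["sql-injection"], in_march)
```

A real system would run these as queries against a vuln store, but the aggregations are the same.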

If we think of risk = vuln x threat x business impact as a basic model
we can understand how this vuln data would be used.
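As a toy illustration of that model (the 0-to-1 scales and the numbers are invented for this sketch; per the point above, WAS would supply only the vuln term):

```python
def risk(vuln_severity: float, threat: float, business_impact: float) -> float:
    """Basic multiplicative model: risk = vuln x threat x business impact.

    Each input is a normalized 0.0-1.0 score (scales invented for
    illustration). WAS data would feed only the vuln term; threat and
    business impact come from the consuming risk system.
    """
    return vuln_severity * threat * business_impact

# Same vuln, two environments: the business context dominates the score.
print(risk(0.9, 0.5, 0.2))  # low-value internal app
print(risk(0.9, 0.5, 1.0))  # credit-card store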

The real value of this data at a high level to me is when someone is
able to apply the vuln data to an asset and know what the impact to
their business is if that asset was exploited (i.e. threat matured).

We have no idea about the impact to the business, so we can't feed that
in anyway. We have an idea about the impact to a system (i.e. root
compromise etc.), but not to the business.

We can feed in part of the vuln (i.e. technical, and maybe it should
influence operational/management through root cause under certain
circumstances).

We do not know the real threat (i.e. the threat of an overflow vuln
being used on a power utility company after the NE blackouts is very
much higher than before, but a WAS endpoint system wouldn't know the
threat model, as we don't know, and shouldn't try to define, the end
environment). That said, we should feed into the threat portion of a
risk model things like whether exploit code exists, whether it can be
automated, etc. I think that's useful data we should capture, and it
allows people to build better models.

So I think this model should produce a vulnerability severity and a
threat indicator, which on their own are useful but are really intended
to be fed into risk management systems, which is where that stuff is
truly valuable.

In your overview we started defining data that a researcher or vuln
analyst may not know about.

*         Quantity of data (%)

*         ...TODO:  Add more consequence factors

I think we can define the vuln severity as a form of potential impact
to the system (i.e. data modification, partial system compromise, total
system compromise, exploited remotely or locally, etc.) and place a
weighting on those factors.

An example (and this is just for illustrative purposes) might be a
local buffer overflow where you needed to be on the local system to
exploit it.

Weightings: 0.5 negative, 1.0 neutral, 1.5 positive.

Effect is total system compromise (therefore 1.5) but locally
exploitable (0.5).

I.e. vuln severity is a function of a Technical Impact Factor and a
Threat Prevalence Factor.
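The example above can be sketched as follows. Treating severity as the product of the two factors is my reading of the 1.5 x 0.5 example; the factor names and the value tables are illustrative, not from the spec:

```python
# Weightings from the example: 0.5 negative, 1.0 neutral, 1.5 positive.
# Which outcome gets which weight is an assumption for this sketch.
TECHNICAL_IMPACT = {
    "data modification": 1.0,
    "partial system compromise": 1.0,
    "total system compromise": 1.5,
}
THREAT_PREVALENCE = {
    "local": 0.5,   # attacker must already be on the system
    "remote": 1.5,  # exploitable over the network
}

def vuln_severity(impact: str, access: str) -> float:
    # Assumes the two factors combine multiplicatively, as the
    # 1.5 x 0.5 example suggests.
    return TECHNICAL_IMPACT[impact] * THREAT_PREVALENCE[access]

# Local buffer overflow: total system compromise (1.5), local only (0.5).
print(vuln_severity("total system compromise", "local"))  # 0.75
```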

That said there are loads of ways to do this.

I am open to any suggestions but would like to keep it simple !

Mark Curphey
Consulting Director
Foundstone, Inc.
Strategic Security

949.297.5600 x2070 Tel
781.738.0857 Cell
949.297.5575 Fax

http://www.foundstone.com

This email may contain confidential and privileged information for the
sole use of the intended recipient. Any review or distribution by others
is strictly prohibited. If you are not the intended recipient, please
contact the sender and delete all copies of this message. Thank you.


From: David Raphael [mailto:draphael@citadel.com]
Sent: Thursday, April 01, 2004 6:09 PM
To: was@lists.oasis-open.org
Subject: [was] Updated WAS Classification Scheme

Hello Everyone,

I've updated this document with a rough draft of the Vulnerability
Ranking model.  Please review and pass along any comments you have.  I
will continue to update it this weekend with more detail.


David Raphael
