


Subject: RE: [was] Updated WAS Classification Scheme


Ok,


This is an interesting set of comments.

I think we cannot include the risk aspect within this part of the schema. I think this piece of the schema should be exclusive to severity ranking.

However, we could enhance the overall schema to provide a transport for the risk model. In other words, provide placeholders to define the business aspect of the asset in question; this would fall into some kind of profile information for the exploitable item. This would satisfy the need for some businesses to transport common risk data, like customer exposure for banks, classified data for the DoD, etc.

I agree that as a consultant you will know about the business impact, but as a generic vulnerability language we still need to be able to rank a vulnerability in the absence of this information.

I can take a stab at describing how the ranking data could be combined with the other risk data elements. I will remove any quantity factors (as we do not know those for all systems), and I will add the weighting model to the applicable elements.

I think that in the case of a vulnerability like the myPhpCreditCardStore example, we will be able to rank it as CRITICAL. It seems that a CRITICAL severity will quickly add up to a high-risk vulnerability. Maybe not?


David Raphael



-----Original Message-----
From: Jeff Williams [mailto:jeff.williams@aspectsecurity.com] 
Sent: Friday, April 02, 2004 9:43 AM
To: Mark Curphey; David Raphael; was@lists.oasis-open.org
Subject: Re: [was] Updated WAS Classification Scheme

Mark,

Great summary of the difficulty here. I think our scheme should allow us to express as much information as possible, but it needs to be clear about what parts of risk are not covered.

I'm thinking that in many cases, we WILL know quite a lot about the business impact of a vulnerability -- especially if you're an employee or consultant working with the company closely and want to use WAS to describe and track the issue.

Even if some researcher is testing myPhpCreditCardStore and finds a SQL injection, he'll want to be able to say that this will disclose all the CCs in the DB, and that Visa and the FTC will levy fines and the company's reputation will be shot.

So I'm leaning towards a system where we CAN specify as much as we know about impact, but it's not required. The real trick is prioritizing items where you don't know enough about the impact. But that has to be up to the business.

--Jeff

----- Original Message ----- 
From: "Mark Curphey" <mark.curphey@foundstone.com>
To: "David Raphael" <draphael@citadel.com>; <was@lists.oasis-open.org>
Sent: Friday, April 02, 2004 10:05 AM
Subject: RE: [was] Updated WAS Classification Scheme


Dave,

I know we just touched on discussing various models at the face-to-face (i.e. we all know this needs a lot more thought), but here are some of my thoughts.

I think we need to understand and plan this as our contribution to the bigger picture of risk, i.e. risk is what people ultimately want to measure. What's the risk to my business? WAS is about vulns, which are only a part of that risk equation, and therefore I think we need to find a way to rank the severity of the vulnerabilities in such a way that we can feed risk systems with meaningful, useful data, and companies can use other data to calculate the risk in their own way. But we shouldn't venture that way ourselves. It's very complex and well outside the realm of WAS, IMHO.

If we focus on NIST 800-30 as a high-level way of determining risk, it may help to bring some clarity. It categorizes vulnerabilities into operational, technical and management. The things we are dealing with are obviously technical vulnerabilities (although the root cause element may also help indicate management or operational, I guess). But the point, IMHO, is that we need to be careful to define the scope to the vuln and part of the threat only. "Part of the threat" is explained in a second.

I think someone building a sig management system around WAS-only data would want to have several views into the system: # of vulns, # of vulns of a particular severity, # of vulns by type (VulnType), # of vulns within dates, etc.
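
Those views are easy to derive once each vuln record carries a severity and a type. A toy sketch (the record shape and field names here are hypothetical illustrations, not the WAS schema):

```python
# Hypothetical vuln records -- field names are illustrative, not WAS.
from collections import Counter

vulns = [
    {"type": "sql-injection", "severity": "CRITICAL"},
    {"type": "xss", "severity": "MEDIUM"},
    {"type": "sql-injection", "severity": "HIGH"},
]

total = len(vulns)                                   # total vuln count
by_severity = Counter(v["severity"] for v in vulns)  # counts per severity
by_type = Counter(v["type"] for v in vulns)          # counts per VulnType

print(total)                    # -> 3
print(by_severity["CRITICAL"])  # -> 1
print(by_type["sql-injection"]) # -> 2
```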

If we think of risk = vuln x threat x business impact as a basic model, we can understand how this vuln data would be used.
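
As a rough sketch of that basic model (the function name and values are mine, purely for illustration; WAS would only ever supply the first two inputs, never the business impact):

```python
# Illustrative only: risk = vuln x threat x business_impact.
# A WAS endpoint could emit vuln severity and a threat indicator;
# the business supplies its own impact figure.

def risk_score(vuln_severity: float, threat: float, business_impact: float) -> float:
    """Combine the three factors of the basic multiplicative model."""
    return vuln_severity * threat * business_impact

print(risk_score(1.5, 1.0, 2.0))  # -> 3.0
```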

The real value of this data at a high level, to me, is when someone is able to apply the vuln data to an asset and know what the impact to their business is if that asset were exploited (i.e. the threat matured).

We have no idea about the impact to the business, so we can't feed that in anyway. We have an idea about the impact to a system (i.e. root compromise, etc.), but not to the business.

We can feed in part of the vuln (i.e. technical, and maybe it should influence operational/management through root cause under certain circumstances).


We do not know the real threat (i.e. the threat of an overflow vuln being used on a power utility company after the NE blackouts is very much higher than before, but a WAS endpoint system wouldn't know the threat model, as we don't know, or shouldn't try to define, the end environment). That said, we should feed into the threat portion of a risk model things like whether exploit code exists, whether it can be automated, etc. I think that's useful data we should capture, and it allows people to build better models.
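
One possible shape for that threat-side data (the class, field names, and toy weighting below are all hypothetical, not part of the WAS schema; they just show how "exploit code exists" and "can be automated" could feed a threat factor):

```python
# Illustrative only -- not the WAS schema.
from dataclasses import dataclass

@dataclass
class ThreatIndicator:
    exploit_code_exists: bool
    can_be_automated: bool

    def factor(self) -> float:
        # Toy weighting: each true indicator bumps the threat factor up.
        f = 1.0
        if self.exploit_code_exists:
            f += 0.25
        if self.can_be_automated:
            f += 0.25
        return f

print(ThreatIndicator(exploit_code_exists=True, can_be_automated=False).factor())  # -> 1.25
```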

So I think this model should produce a vulnerability severity and a threat indicator, which on their own are useful but are really intended to be fed into risk management systems, which is where that stuff is truly useful.

In your overview we started defining data that a researcher or vuln analyst may not know about.


*         Quantity of data (%)

*         ...TODO:  Add more consequence factors


I think we can define the vuln severity as a form of potential impact to the system (i.e. data modification, partial system compromise, total system compromise, exploited remotely or locally, etc.) and place a weighting on those factors.

An example (and this is just for illustrative purposes) may be a local buffer overflow where you need to be on the local system to exploit it:

(0.5 negative, 1.0 neutral, 1.5 positive)

The effect is total system compromise (therefore 1.5) but locally exploitable (0.5).

I.e. vuln severity is a factor of a Technical Impact Factor and a Threat Prevalence Factor.
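
A minimal sketch of that illustrative weighting, assuming the two factors combine multiplicatively (the mail says "a factor of" without fixing the combination rule, so the multiplication here is my assumption):

```python
# Mark's illustrative weights: 0.5 negative, 1.0 neutral, 1.5 positive.
WEIGHTS = {"negative": 0.5, "neutral": 1.0, "positive": 1.5}

# Local buffer overflow example from the mail:
technical_impact = WEIGHTS["positive"]   # total system compromise -> 1.5
threat_prevalence = WEIGHTS["negative"]  # locally exploitable only -> 0.5

# Assumed combination rule: severity = impact x prevalence.
severity = technical_impact * threat_prevalence
print(severity)  # -> 0.75
```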

That said there are loads of ways to do this.

I am open to any suggestions but would like to keep it simple !

Mark Curphey
Consulting Director
Foundstone, Inc.
Strategic Security

949.297.5600 x2070 Tel
781.738.0857 Cell
949.297.5575 Fax

http://www.foundstone.com

This email may contain confidential and privileged information for the
sole use of the intended recipient. Any review or distribution by others
is strictly prohibited. If you are not the intended recipient, please
contact the sender and delete all copies of this message. Thank you.



  _____

From: David Raphael [mailto:draphael@citadel.com]
Sent: Thursday, April 01, 2004 6:09 PM
To: was@lists.oasis-open.org
Subject: [was] Updated WAS Classification Scheme



Hello Everyone,



I've updated this document with a rough draft of the Vulnerability
Ranking model.  Please review and pass along any comments you have.  I
will continue to update it this weekend with more detail.





Regards,

David Raphael




