

Subject: Re: [cti-stix] Proposal - Admiralty Code + ACH


All:

What if we combine Terry's suggestions about the Admiralty Code with a more classical interpretation of Analysis of Competing Hypotheses (ACH) as has been used in the intelligence community?  This concept is outlined in this chapter of "The Psychology of Intelligence Analysis" from the CIA:

https://www.cia.gov/library/center-for-the-study-of-intelligence/csi-publications/books-and-monographs/psychology-of-intelligence-analysis/art11.html

This would address the Use Case that Patrick outlined below (which, BTW, really coincides with some of the threat intel sharing cases I've observed), while at the same time moving us away from a poorly understood description of "confidence" that seems to be problematic because of the different assumptions each user brings to the table.

I would see the ACH factor as a Third Dimension to what we've been discussing with respect to Information Reliability.

Realize that I'm looking at this from the POV of the Analyst/User who is trying to take IoCs, cyber observables, and any other clues and assemble the bigger picture, without a lot of certainty about the Threat Actor, the motivation, the targeted systems, etc. In this context, speculations about competing hypotheses, and how they might be assembled in, for example, a Report object, might be useful. An Information Reliability/ACH measure applied (e.g., at the CybOX object level) then becomes useful to the Human Analysts interpreting the STIX/CybOX information.

Jane Ginn


On 7/29/2015 5:16 PM, Terry MacDonald wrote:
Not necessarily, Bret. It may be a 'hypothesis' that someone wants to share - to 'put it out there' to see if anyone agrees with it. Being able to mark something (using the Information Reliability scale I posted earlier) as '3 - Possibly True' or '4 - Doubtfully True' then allows consumers to determine what level of confidence they want to assign to the information.

On Jul 29, 2015, at 16:39, Patrick Maroney <Pmaroney@Specere.org> wrote:

Re: "One comment, if you have no confidence in the accuracy of something, meaning you have not done any due diligence on your end, should you really be sharing it? Isn't this the whole problem with the Internet today? People spewing forth crap that is just wrong, and then it gets archived in Google as Gospel. "

Sharing unfiltered, unvetted intelligence on Emerging Threats/Previously Unrecognized Threats is extremely valuable in many of the communities I participate in.  The critical element is to properly mark it.  For example, one community uses "Investigating" to flag something as preliminary. Say I've analyzed 100% 0Day APT Malware, run Strings on the binary, and gotten 50 IP addresses and domains, yet the malware was only observed attempting communication with 6 of the 50. By sharing this type of intelligence [WITH CONTEXT] with the community, others can be aware and say: "Hey!!!  I didn't see the vectors you did, but I did see a different subset of 6 out of your 50.  We ran it in an air-gapped sandbox, and when the test access to Google.com failed, the malware beaconing switched to these different IPs and ports.  Let's mark these 6 new IOCs as actionable and let everyone know the malware may behave differently in different environments and to keep an eye out for the other 38 IOCs."

Capturing and retaining properly marked indicators has also revealed key discoveries years later. For example: "Hey, we were investigating something else and our search revealed APT Actor 'X' was indeed using nameyourfavoritecommoditybotnet back in 2011!!!!  We didn't realize we had actionable indicators at the time.  Thanks for posting those informational Strings back in 2011!!!!"

Filtering intelligence will significantly impede detection of multi-stage exploitation and of RAT variants deployed in the entrenchment phase of lateral movement, once the adversary has established their initial beachhead and begins deploying.  The key is to ensure you convey all of the context you've developed (and "show your work" to back up any of your assertions).
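Patrick's 50-strings walkthrough above can be sketched as a simple triage step. This is only an illustrative sketch - the marking names and IOC values here are hypothetical, not an actual STIX/CybOX vocabulary: IOCs actually observed beaconing get marked actionable, while the remainder are still shared as informational so other communities can corroborate them later.

```python
def triage(extracted, observed):
    """Split extracted IOCs by whether they were seen in live traffic."""
    seen = set(observed)
    actionable = [ioc for ioc in extracted if ioc in seen]
    informational = [ioc for ioc in extracted if ioc not in seen]
    return actionable, informational

# Hypothetical data: 50 strings pulled from the binary
extracted = ["10.0.0.%d" % i for i in range(50)]
observed_by_me = extracted[:6]      # the 6 I saw beaconing
observed_by_peer = extracted[6:12]  # a different 6 a peer's sandbox saw

actionable, informational = triage(extracted, observed_by_me + observed_by_peer)
# 12 IOCs are now actionable; the other 38 remain shared as informational
```

Combining both parties' observations is what makes the shared-but-unvetted strings pay off: each consumer re-runs the same partition as new sightings arrive.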


On Wed, Jul 29, 2015 at 3:06 PM -0700, "Terry MacDonald" <terry.macdonald@threatloop.com> wrote:

I was thinking back to the Admiralty Code (https://en.wikipedia.org/wiki/Admiralty_code) regarding reliability and credibility when I wrote that.  The idea was that someone might have learned from a third party that there was a relationship between Threat Group A and Threat Group B, but not yet have been able to determine the reliability/truthfulness of what that third party had said. They may want to send out that relationship as a kind of 'something we've heard but haven't had a chance to verify ourselves'. That's where I was headed with the Unknown option.

Maybe it would be better to use the terms from Information Reliability instead of Confidence when describing relationships within STIX? (https://en.wikipedia.org/wiki/Intelligence_source_and_information_reliability)

Rating - Description
1 - Confirmed: Logical, consistent with other relevant information, confirmed by independent sources.
2 - Probably True: Logical, consistent with other relevant information, not confirmed.
3 - Possibly True: Reasonably logical, agrees with some relevant information, not confirmed.
4 - Doubtfully True: Not logical but possible, no other information on the subject, not confirmed.
5 - Improbable: Not logical, contradicted by other relevant information.
6 - Cannot Be Judged: The validity of the information cannot be determined.
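As a rough sketch of how the scale above might be carried on a shared relationship assertion - the field names and the dict shape are illustrative assumptions, not an actual STIX schema - the six ratings map naturally onto an enumeration:

```python
from enum import Enum

class InformationReliability(Enum):
    """The six-point information reliability scale quoted above."""
    CONFIRMED = 1         # Confirmed by independent sources
    PROBABLY_TRUE = 2     # Logical, consistent, not confirmed
    POSSIBLY_TRUE = 3     # Reasonably logical, partial agreement
    DOUBTFULLY_TRUE = 4   # Not logical but possible, not confirmed
    IMPROBABLE = 5        # Contradicted by other information
    CANNOT_BE_JUDGED = 6  # Validity cannot be determined

def describe(rating: InformationReliability) -> str:
    """Render a rating the way Terry quotes it, e.g. '3 - Possibly True'."""
    label = rating.name.replace("_", " ").title()
    return "%d - %s" % (rating.value, label)

# A hypothetical 'something we've heard but not verified' relationship,
# marked so consumers can decide what confidence to assign themselves.
relationship = {
    "source": "Threat Group A",
    "target": "Threat Group B",
    "relationship_type": "associated-with",
    "information_reliability": describe(InformationReliability.CANNOT_BE_JUDGED),
}
```

Marking the assertion rather than suppressing it is exactly the trade-off discussed above: the producer states what they know and how well they know it, and each consumer sets their own confidence threshold.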



