
Subject: Re: [cti] RE: Versioning Background Docs


Taylor, Marlon wrote this message on Mon, Mar 14, 2016 at 08:06 +0000:
> An email for our international members and US all-nighters.
> 
> I was going to type up my strategy for versioning, but as I reviewed the top 2 links (see below), I saw Sean summarized it in Option 3 (honestly, I was basically going to say the same thing). The only difference in my approach is the use of a HASH rather than a GUID for the ID. So the format is [ObjectType-HASH].
> 
> Since Sean did such a good job analyzing and summarizing a point I was going to make, I'll focus on my reasons for a HASH over a GUID.
> 
> HASH based IDs:
> -          Meaningful IDs: with GUIDs there is no relationship between the ID and the content; hash-based IDs, however, will be directly related to the content.

The biggest issue I see is that tracking relationships will be
difficult...
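
For concreteness, a minimal sketch of what an [ObjectType-HASH] identifier
could look like, assuming SHA-256 over a canonical JSON serialization of the
object's content (the helper name and example object are illustrative, not
part of the proposal):

import hashlib
import json

def hash_based_id(object_type, content):
    # Hash a canonical JSON form of the content (sorted keys, no
    # insignificant whitespace) so the ID is derived from the content.
    canonical = json.dumps(content, sort_keys=True, separators=(",", ":"))
    digest = hashlib.sha256(canonical.encode("utf-8")).hexdigest()
    return f"{object_type}-{digest}"

indicator = {"pattern": "[ipv4-addr:value = '198.51.100.7']",
             "description": "Known C2 address"}
print(hash_based_id("indicator", indicator))
# -> indicator-<64 hex characters>, tied directly to the content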

> -          Secret searches: someone could request info on [ObjectType-HASH] without having to convey what [ObjectType-HASH] is. This could be done with GUIDs; however, GUIDs are not based on the content, so there is no assurance, without first requesting that object, that you are getting what you think you’re referencing (you might want to always ask for a copy of what you’re referencing).
> -          Duplicate detection: since the hashing process will be part of the standard (part of this idea), consumers/producers/brokers will be able to determine whether a given piece of content is a duplicate.

This doesn't solve the problem where the objects are exactly the same,
except that one has a misspelling in the description..  Though they are
the "same", their hashes won't match...

> o   This is an issue that hasn’t been addressed formally but can be resolved CTI-wide right here.
> -          Data Integrity/Assurance: since the hashing process will be known, consumers can double-check the provided hash by computing it themselves to determine if there is a mismatch.

This only addresses one aspect of it... You still need to get the
parent hash signed by the author to ensure the validity of that
parent hash...

> o   This addresses concerns about ensuring all needed data is passed along with content.
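
A consumer-side check for that integrity property might look like the
following, again assuming the canonical-JSON hashing sketched above; note
that, per the point above, recomputing the hash only proves the content
matches the ID, not that the ID came from the claimed author (that still
needs a signature):

import hashlib, json

def verify_hash_id(object_type, content, claimed_id):
    # Recompute the hash over the received content and compare it to
    # the ID the producer supplied.
    canonical = json.dumps(content, sort_keys=True, separators=(",", ":"))
    digest = hashlib.sha256(canonical.encode("utf-8")).hexdigest()
    return claimed_id == f"{object_type}-{digest}"

content = {"name": "Operation X", "description": "Campaign write-up"}
good_id = "campaign-" + hashlib.sha256(
    json.dumps(content, sort_keys=True, separators=(",", ":")).encode("utf-8")
).hexdigest()
print(verify_hash_id("campaign", content, good_id))                  # True
print(verify_hash_id("campaign", {"name": "Operation X"}, good_id))  # False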
> 
> A question about tracking evolving content such as incidents/campaigns has been brought up (Sean also mentioned this concern).
> 
> Questions/Answers:
> 1.       How will this work for evolving content (or series) such as incidents?

Having to reissue all relationships when one object changes is scary...
Don't forget that you then have to remove all the relationships that these
new relationships replaced...
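
A tiny illustration of why that hurts under content-derived IDs: since the
target's ID is its hash, editing the target gives it a new ID, and every
relationship pointing at the old ID has to be reissued (and the old one
retracted). The field names below are illustrative only:

import hashlib, json

def content_hash(content):
    return hashlib.sha256(
        json.dumps(content, sort_keys=True, separators=(",", ":")).encode()
    ).hexdigest()

campaign_v1 = {"name": "Operation X", "description": "Initial write-up"}
campaign_v2 = {"name": "Operation X", "description": "Updated write-up"}

rel = {"type": "relationship",
       "source_ref": "indicator-" + "ab" * 32,  # placeholder source ID
       "target_ref": "campaign-" + content_hash(campaign_v1)}

# After the campaign is edited its hash changes, so the old relationship
# no longer points at the current version; a replacement must be issued
# and the original retired.
rel_reissued = dict(rel, target_ref="campaign-" + content_hash(campaign_v2))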

The biggest issue that I see w/ this proposal is the gap problem...

Say I have object A, and I issue versions a, b, and c...  I know that b
is derived from a because there is a new_version: 'a' field, and that c
is a revision of b...  If a receiver never gets b, and b is unobtainable
by them (say the originator deleted it, and b no longer exists), there is
no way to track that c is a revision of a w/o including the entire history
of the chain in c...

For a few versions, this isn't a problem, but if you get an object w/ a
few hundred versions, the chance that the chain breaks because of a
missing object increases...
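
The gap can be made concrete with the new_version back-pointer: each version
names only its immediate predecessor by hash, so if any intermediate version
goes missing, the chain from the latest version back to the original can no
longer be reconstructed (sketch only; the store and walker are hypothetical):

import hashlib, json

def content_hash(content):
    return hashlib.sha256(
        json.dumps(content, sort_keys=True, separators=(",", ":")).encode()
    ).hexdigest()

a = {"name": "report", "rev": 1}
b = {"name": "report", "rev": 2, "new_version": content_hash(a)}
c = {"name": "report", "rev": 3, "new_version": content_hash(b)}

store = {content_hash(a): a, content_hash(c): c}  # b was never received

def ancestors(obj, store):
    # Walk the new_version chain; it stops dead at the first missing link.
    chain, prev = [], obj.get("new_version")
    while prev is not None:
        parent = store.get(prev)
        if parent is None:
            chain.append("<missing " + prev[:8] + ">")
            break
        chain.append(parent)
        prev = parent.get("new_version")
    return chain

print(ancestors(c, store))  # hits the missing b, so a is unreachable from c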

The nice thing about this proposal is that a report or package is just a
Merkle tree (akin to a blockchain), and simply signing that root hash
authenticates the entire report/package...  But I believe that the
complications outweigh the advantages...
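
That closing observation can be sketched the same way: if a report's content
is just the list of its member objects' hashes, then its own hash acts as a
root, and signing that single value covers every object transitively (a flat
hash list here rather than a real Merkle tree, and no actual signature code):

import hashlib, json

def content_hash(content):
    return hashlib.sha256(
        json.dumps(content, sort_keys=True, separators=(",", ":")).encode()
    ).hexdigest()

objects = [{"type": "indicator", "pattern": "example-pattern-1"},
           {"type": "campaign", "name": "Operation X"}]

report = {"type": "report",
          "object_refs": sorted(content_hash(o) for o in objects)}
root = content_hash(report)

# Signing just `root` authenticates the whole package: changing any object
# changes its hash, which changes object_refs, which changes root.
print("value to sign:", root)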

-- 
John-Mark

