I really don’t agree with #4 here; it’s ambiguous. It means roughly one in every 1,000,000 documents will appear to have a precision of “second” rather than “microsecond”, because the natural microsecond value happens to be 0. Then roughly one in every 60,000,000 (granted, getting
rare here, but relevant if you talk billions of documents per day) will accidentally appear to have a precision of “minute” rather than “microsecond”.
This is not to say that we need a precision field, just that if we do it should be explicit rather than implicit.
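To make the ambiguity concrete, here is a minimal Python sketch (the datetimes are illustrative, not from any real feed): a timestamp whose microsecond field is genuinely zero serializes to exactly the same string as one where the sub-second value was unknown and zeroed out per rule #4, so the consumer cannot distinguish the two cases without an explicit precision field.

```python
from datetime import datetime, timezone

# Case 1: the event genuinely occurred at exactly 13:35:12.000000 UTC.
exact = datetime(2015, 11, 23, 13, 35, 12, 0, tzinfo=timezone.utc)

# Case 2: only second-level precision was known; microseconds zeroed per rule #4.
unknown = datetime(2015, 11, 23, 13, 35, 12, tzinfo=timezone.utc)

fmt = "%Y-%m-%dT%H:%M:%S.%f+00:00"
print(exact.strftime(fmt))    # 2015-11-23T13:35:12.000000+00:00
print(unknown.strftime(fmt))  # identical string -- the two cases are indistinguishable
assert exact.strftime(fmt) == unknown.strftime(fmt)
```

The serialized forms are byte-for-byte identical, which is exactly the implicit-precision problem being debated.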
I have been going back and forth on the usefulness of the precision field. Perhaps, as Jason states, we could easily get by without a precision field in our workflows.
Things I think we can agree on so far:
1) A timestamp format of yyyy-mm-ddThh:mm:ss.mmmmmm±hh:mm MUST be used
Example: 2015-11-23T13:35:12.000000+00:00 (for 13:35:12 UTC)
2) All timestamps MUST be in UTC; a UI will convert them as needed for an analyst
3) Timestamps will have 6 digits of precision
4) Any values that are not known will be zeroed out (say I only know the date, not the time)
A) Is it valid to put in a timezone offset from UTC? Or must the value be actually "in" UTC.
B) Do we actually need to manually say what the precision is? Meaning do we need to call out that it is a "year", "month", "day", "hour", "minute", or "second".
i) Sean believes we need this
ii) Jason does not believe we need this. I think I am starting to lean towards Jason on this.
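The agreed points above can be sketched as a small serializer. This is a minimal illustration of the proposed rules, assuming microsecond (6-digit) precision and zeroed-out unknown fields; the helper name `stix_timestamp` is hypothetical, not part of any spec.

```python
from datetime import datetime, timezone

def stix_timestamp(dt: datetime) -> str:
    """Serialize per the proposed rules: UTC, 6-digit fractional seconds."""
    dt = dt.astimezone(timezone.utc)          # rule 2: normalize to UTC
    return dt.strftime("%Y-%m-%dT%H:%M:%S.%f") + "+00:00"  # rules 1 and 3

# Rule 4: date known, time unknown -- time fields are zeroed out.
date_only = datetime(2015, 11, 23, tzinfo=timezone.utc)
print(stix_timestamp(date_only))  # 2015-11-23T00:00:00.000000+00:00
```

Note that nothing in the output records that only the date was known, which is the crux of open question B.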
Let's lock in what we can agree on (the stones in the path) and direct our discussions to the remaining open questions. This will enable us to drive this seemingly easy win to consensus.
Bret Jordan CISSP
Director of Security Architecture and Standards | Office of the CTO
Blue Coat Systems
PGP Fingerprint: 63B4 FC53 680A 6B7D 1447 F2C0 74F8 ACAE 7415 0050
"Without cryptography vihv vivc ce xhrnrw, however, the only thing that can not be unscrambled is an egg."
Agree 100% on the nanoseconds - if not useful, they should be dropped.
I want to pick up the debate we were having on the Slack channel before it went kapoof. I do not think we should be coming at this from the point of view of "this could be theoretically useful for <x>". This is exactly how STIX got so complicated in the first place.
We should be coming at this from the point of view of
- What is the minimal amount of information needed to communicate this data point?
- OK, now, what additional information *beyond the minimum* is required to fulfil all workflows?
Notice I am using the word "workflow", not "use case"; this is on purpose. All of these decisions should be made from the point of view of an end-to-end workflow: not only the producer making the data, but also the consumer of the data and what usefulness it
could provide them.
So far the requirement for a precision field has assumed that there is a use case on the recipient side for this data - I challenge this. Let's assume we have a mandatory nanosecond-accurate timestamp. What is the workflow by which I would create a timestamp
that does not have nanosecond accuracy, send it to a consumer, and then have the consumer improperly process the information or take invalid action based on it? A use case was presented on Slack by @sbarnum that you could use this for high-precision temporal
analysis - but I assert that said analysis still does not require a precision field, because in the only use cases where you would perform that analysis, the data would always have full precision (no one is going to take human-generated incident reports and perform
millisecond-level temporal analysis on them; that doesn't make any sense).
Product Architect, Security Intelligence, IBM Security Systems
Without data, all you are is just another person with an opinion - Unknown
From: "Struse, Richard" <Richard.Struse@HQ.DHS.GOV>
To: "firstname.lastname@example.org" <email@example.com>, "Jordan, Bret" <firstname.lastname@example.org>,
Trey Darley <trey@SOLTRA.COM>
Cc: Jason Keirstead/CanEast/IBM@IBMCA, Jerome Athias <email@example.com>, "firstname.lastname@example.org"
<email@example.com>, "Wunder, John A." <firstname.lastname@example.org>, Patrick Maroney <Pmaroney@Specere.org>,
"Sean D. Barnum" <email@example.com>
Date: 11/23/2015 03:42 PM
Subject: RE: [cti-stix] STIX timestamps and ISO 8601:2000
Sent by: <firstname.lastname@example.org>
Are there any generally-available tools or technologies that produce
timestamps with nanosecond precision today? If we can't identify any I
would suggest that we support 6 digits (microseconds) and be done.
This is a trivial but important way that we can communicate to the broader
community that we are rooted in real-world practice.
From: email@example.com [mailto:firstname.lastname@example.org]
On Behalf Of Tony Rutkowski
Sent: Monday, November 23, 2015 2:23 PM
To: Jordan, Bret; Trey Darley
Cc: Jason Keirstead; Jerome Athias;
John A.; Patrick Maroney; Sean D. Barnum
Subject: Re: [cti-stix] STIX timestamps and ISO 8601:2000
It's not inconceivable that fractional microsecond
values matter in virtualization environments within
the same facility. On a larger scale, the uncertainties
associated with the timestamp value will make
nanosecond precision moot.
Has anyone articulated what the overhead
differential is of an _expression_ with a precision
of microseconds versus nanoseconds?
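One way to answer the overhead question for the serialized form: with the proposed string format, nanosecond precision costs exactly three extra characters per timestamp. A quick sketch (sample values are illustrative):

```python
# Same instant serialized at microsecond vs. nanosecond precision.
micro = "2015-11-23T13:35:12.123456+00:00"     # 6 fractional digits
nano  = "2015-11-23T13:35:12.123456789+00:00"  # 9 fractional digits

print(len(micro), len(nano))          # 32 35
print(len(nano) - len(micro), "bytes per timestamp")  # 3 bytes per timestamp
```

So the wire-format differential is small; the stronger argument against nanoseconds is the accuracy/uncertainty point above, not size.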
On 2015-11-23 01:08 PM, Jordan, Bret wrote:
> I mistyped in my last email; I meant to say microseconds, not
> milliseconds, aka 6 digits of precision, not 3 digits of precision.
> Wireshark and other networking / security tools are able to work with
> and provide 6 digits of precision. That is VERY common. What is not
> really common today is 9 digits of precision.