
Subject: RE: [cti-stix] STIX timestamps and ISO 8601:2000

Firstly, I apologize for the length of this reply. As always, it started with two lines and kind of grew from there…


I can agree with 1-3.


As far as the open questions go:


A) No. The timestamp should always be in UTC. When performing calculations at scale (e.g. analytics), having all the times in UTC should make things quicker. Also, performing text-based date extraction (say, using grep) on STIX docs would be far easier if we always make sure it's in UTC.
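To illustrate the grep point, here is a quick Python sketch (the STIX fragment below is purely hypothetical) showing that when every timestamp is normalized to UTC, one fixed pattern finds them all:

```python
import re

# A hypothetical STIX fragment; the element and attribute names are illustrative only.
doc = '<stix:Observable timestamp="2015-11-23T13:35:12.000000+00:00"/>'

# With all timestamps normalized to UTC (+00:00), a single fixed-width pattern
# matches every timestamp in the document -- no per-offset handling needed.
UTC_TS = re.compile(r"\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d{6}\+00:00")

print(UTC_TS.findall(doc))  # ['2015-11-23T13:35:12.000000+00:00']
```

The same pattern works equally well as a plain `grep -oE` expression over raw files.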

B) The issue appears to be that mandating a high level of precision in the timestamp format means that we lose the ability to discern how precise the timestamp actually is.


There seem to be three ways around it that have been presented:


i) Allow timestamps to vary based on how precise they are (discounted due to lack of uniformity).

ii) All timestamps are the same format, and we send an additional explicit description of how precise the timestamp is.

iii) The level of timestamp precision is dependent on the object and the ways that you use the object. All timestamps are the same format, but if a producer doesn't have the required precision, the more detailed values are 'zeroed' out.


With option iii) there is the potential for ambiguity. The premise (if I understand it correctly) is that the precision of timestamp required by a consumer for each type of object is different, and that it will naturally match the precision that the producer is able to generate.

· High-precision time measurement has been available in all common processors since the Pentium era. Android, Linux, and Windows all support it, and the ARM and Tegra architectures appear to support it as well.

· An Observation/Observable Instance generating tool is likely to already use high-precision timing, as speed and accuracy matter to that tool. It will therefore not be onerous for it to generate high-precision STIX timestamps.

· A STIX threat intel analytics tool will have whatever ability is necessary for processing STIX-compliant content. If we state that it needs microsecond precision, then it will have to support generating that precision.

· When Threat Analysts create content in their Threat Intel analysis tools, they will need to manually enter the details of things like Threat Actors and click the 'save' button to create the object. At that point, the tool will be able to write a highly precise timestamp of the exact moment the person pressed save. That level of precision may not be especially useful for a high-level object such as a Threat Actor, but it will still be possible, and it maintains a consistent timestamp format. Millisecond resolution provides no specific benefit for manually created data.

· When talking about the estimated start dates of Campaigns or similar, Start_Time and End_Time are all going to be informed by the detailed Observations that are ultimately associated with the Campaign. Those Observations are highly likely to include millisecond-level timing.


In situations where only a less accurate timestamp is available (e.g. the default syslog timestamp is accurate to the nearest second), there are two options:

· Configure the log generator to record the milliseconds (e.g. changing rsyslog.conf to use timegenerated rather than timereported).

· Zero the extra values out. This will be the case for a very small proportion of the population, and it means we are still accurate to within one second at worst.
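The zeroing-out option above can be sketched in a few lines of Python (the year and the strftime format string are illustrative; syslog's classic timestamp omits the year):

```python
from datetime import datetime, timezone

# A syslog-style timestamp that is only accurate to the second (no year, no sub-second part).
raw = "Nov 23 13:35:12"
parsed = datetime.strptime("2015 " + raw, "%Y %b %d %H:%M:%S").replace(tzinfo=timezone.utc)

# strptime leaves the unavailable microseconds at zero, so serializing with %f
# 'zeroes out' the extra digits while keeping the mandated fixed-width format.
stamp = parsed.strftime("%Y-%m-%dT%H:%M:%S.%f+00:00")
print(stamp)  # 2015-11-23T13:35:12.000000+00:00
```

The result is still accurate to within one second, and consumers see one uniform format.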


#4) IMHO this ties in with B) iii) above.


I think that we have to put this whole discussion in perspective. Most organizations have difficulty discovering they have a breach within days or weeks, let alone within a second. So going with B) iii) and having precision to within one second is realistically good enough, in my opinion. It is far more likely that the clocks on the network are not synchronized, and that the tools are reporting different, unrelated times that are way more than a second out of alignment with each other. The zeroed-out millisecond timestamp doesn't impact us much when we have real-world problems such as that. :)




Terry MacDonald

Senior STIX Subject Matter Expert

SOLTRA | An FS-ISAC and DTCC Company

+61 (407) 203 206 | terry@soltra.com



From: cti-stix@lists.oasis-open.org [mailto:cti-stix@lists.oasis-open.org] On Behalf Of Jordan, Bret
Sent: Tuesday, 24 November 2015 8:07 AM
To: Wunder, John A. <jwunder@mitre.org>
Cc: Jason Keirstead <Jason.Keirstead@ca.ibm.com>; Richard Struse <Richard.Struse@HQ.DHS.GOV>; tony@yaanatech.com; Trey Darley <trey@soltra.com>; Jerome Athias <athiasjerome@gmail.com>; cti-stix@lists.oasis-open.org; Patrick Maroney <Pmaroney@Specere.org>; Sean D. Barnum <sbarnum@mitre.org>
Subject: Re: [cti-stix] STIX timestamps and ISO 8601:2000


Okay, so let's pull #4 out and put it in the open questions.  Can we agree on 1-3?  Let's focus on what we CAN agree on, and set stones in the path.  Let's get to one stone at a time.








Bret Jordan CISSP

Director of Security Architecture and Standards | Office of the CTO

Blue Coat Systems

PGP Fingerprint: 63B4 FC53 680A 6B7D 1447  F2C0 74F8 ACAE 7415 0050

"Without cryptography vihv vivc ce xhrnrw, however, the only thing that can not be unscrambled is an egg." 


On Nov 23, 2015, at 13:51, Wunder, John A. <jwunder@mitre.org> wrote:


I really don’t agree with #4 here; it’s ambiguous. It means that roughly once every 1,000,000 documents, one will be issued with a precision of “second” rather than “microsecond” because the natural microsecond value is 0. Then every 60,000,000 (granted, getting rare here, but if you talk billions/day) one will accidentally have a precision of “minute” rather than “microsecond”.
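To make the ambiguity concrete, a small Python sketch: a genuinely microsecond-precise reading that naturally lands on .000000 serializes identically to a second-precision reading whose microseconds were zeroed out, so the consumer cannot tell which case it has.

```python
from datetime import datetime, timezone

# A microsecond-precise reading whose microsecond component happens to be 0 ...
precise = datetime(2015, 11, 23, 13, 35, 12, 0, tzinfo=timezone.utc)
# ... and a second-precision reading with the microseconds zeroed out per #4:
zeroed = datetime(2015, 11, 23, 13, 35, 12, tzinfo=timezone.utc)

fmt = "%Y-%m-%dT%H:%M:%S.%f%z"
# Both serialize to the same string, so the implied precision is lost.
print(precise.strftime(fmt) == zeroed.strftime(fmt))  # True
```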


This is not to say that we need a precision field, just that if we do it should be explicit rather than implicit.


On Nov 23, 2015, at 3:44 PM, Jordan, Bret <bret.jordan@BLUECOAT.COM> wrote:


I have been going back and forth on the usefulness of the precision field.  Perhaps, as Jason states, we could easily get by without a precision field in a workflow context.


Things I think we can agree on so far:

1) A timestamp format of yyyy-mm-ddThh:mm:ss.mmmmmm±hh:mm MUST be used

Example: 2015-11-23T13:35:12.000000+00:00  (for 1:35:12 PM in UTC)

2) All timestamps MUST be in UTC; a UI will change them as needed for an analyst

3) Timestamps will have 6 digits of sub-second precision (microseconds)

4) Any values that are not known will be zeroed out (say I only know the date, not the time)

Example: 2015-11-23T00:00:00.000000+00:00
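For what it's worth, both example strings above fall out directly from the standard library; a minimal Python sketch of points 1-4:

```python
from datetime import datetime, timezone

FMT = "%Y-%m-%dT%H:%M:%S.%f"  # %f gives the 6 digits of sub-second precision (point 3)

# Points 1/2: a fully-known instant, kept in UTC.
full = datetime(2015, 11, 23, 13, 35, 12, 0, tzinfo=timezone.utc)
# Point 4: only the date is known, so every time component is zeroed out.
date_only = datetime(2015, 11, 23, tzinfo=timezone.utc)

print(full.strftime(FMT) + "+00:00")       # 2015-11-23T13:35:12.000000+00:00
print(date_only.strftime(FMT) + "+00:00")  # 2015-11-23T00:00:00.000000+00:00
```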


Open Question(s):

A) Is it valid to put in a timezone offset from UTC, or must the value actually be "in" UTC?

Example 2015-11-23T13:35:12.000000-06:00
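Either way, nothing is lost: an offset form and the UTC form denote the same instant, so a producer or consumer can normalize losslessly. A Python sketch:

```python
from datetime import datetime, timezone

# A producer emits the example local-offset timestamp from open question A ...
local = datetime.strptime("2015-11-23T13:35:12.000000-06:00",
                          "%Y-%m-%dT%H:%M:%S.%f%z")

# ... which normalizes to UTC without losing any information:
utc = local.astimezone(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.%f+00:00")
print(utc)  # 2015-11-23T19:35:12.000000+00:00
```

So question A is really about uniformity for consumers (one fixed pattern, simpler comparisons), not about expressiveness.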


B) Do we actually need to manually say what the precision is? Meaning do we need to call out that it is a "year", "month", "day", "hour", "minute", or "second".

i) Sean believes we need this

ii) Jason does not believe we need this.  I think I am starting to lean towards Jason on this.


Let's focus on what we can agree on (the stones in the path) and focus our discussions on the remaining open questions.  This will enable us to drive this seemingly easy win to consensus.









Bret Jordan CISSP

Director of Security Architecture and Standards | Office of the CTO

Blue Coat Systems

PGP Fingerprint: 63B4 FC53 680A 6B7D 1447  F2C0 74F8 ACAE 7415 0050

"Without cryptography vihv vivc ce xhrnrw, however, the only thing that can not be unscrambled is an egg." 


On Nov 23, 2015, at 13:02, Jason Keirstead <Jason.Keirstead@ca.ibm.com> wrote:


Agree 100% on the nanoseconds - if not useful, they should be dropped.

I want to pick up the debate we were having on the Slack channel before it went kapoof. I do not think we should be coming at this from the point of view of "this could be theoretically useful for <x>". This is exactly how STIX got so complicated in the first place.

We should be coming at this from the point of view of

- What is the minimal amount of information needed to communicate this data point?

- OK, now, what additional information *beyond the minimum* is required to fulfil all identified workflows?

Notice I am using the word "workflow", not "use case"; this is on purpose. All of these decisions should be made from the point of view of an end-to-end workflow: not only the producer making the data, but also the consumer of the data and what usefulness it could provide them.

So far the requirement for a precision field has assumed that there is a use case on the recipient side for this data; I challenge this. Let's assume we have a mandatory nanosecond-accurate timestamp. What is the workflow by which I would create a timestamp that does not have nanosecond accuracy, send it to a consumer, and then have the consumer improperly process the information or take invalid action based on it? A use case was presented on Slack by @sbarnum that you could use this for high-precision temporal analysis, but I assert that said analysis still does not require a precision field, because in the only use cases where you would be doing that analysis, the data would always have the precision (no one is going to take human-generated incident responses and perform millisecond-level temporal analysis on them; that doesn't make any sense).

Jason Keirstead
Product Architect, Security Intelligence, IBM Security Systems
www.ibm.com/security | www.securityintelligence.com

Without data, all you are is just another person with an opinion - Unknown


From: "Struse, Richard" <Richard.Struse@HQ.DHS.GOV>
To: "tony@yaanatech.com" <tony@yaanatech.com>, "Jordan, Bret" <bret.jordan@bluecoat.com>, Trey Darley <trey@SOLTRA.COM>
Cc: Jason Keirstead/CanEast/IBM@IBMCA, Jerome Athias <athiasjerome@gmail.com>, "cti-stix@lists.oasis-open.org" <cti-stix@lists.oasis-open.org>, "Wunder, John A." <jwunder@mitre.org>, Patrick Maroney <Pmaroney@Specere.org>, "Sean D. Barnum" <sbarnum@mitre.org>
Date: 11/23/2015 03:42 PM
Subject: RE: [cti-stix] STIX timestamps and ISO 8601:2000
Sent by: <cti-stix@lists.oasis-open.org>

Are there any generally-available tools or technologies that produce
timestamps with nanosecond precision today?  If we can't identify any I
would suggest that we support 6 digits (microseconds) and be done.

This is a trivial but important way that we can communicate to the broader
community that we are rooted in real-world practice.

-----Original Message-----
cti-stix@lists.oasis-open.org [mailto:cti-stix@lists.oasis-open.org]
On Behalf Of Tony Rutkowski
Sent: Monday, November 23, 2015 2:23 PM
To: Jordan, Bret; Trey Darley
Cc: Jason Keirstead; Jerome Athias;
cti-stix@lists.oasis-open.org; Wunder,
John A.; Patrick Maroney; Sean D. Barnum
Subject: Re: [cti-stix] STIX timestamps and ISO 8601:2000

It's not inconceivable that fractional-microsecond
values matter in virtualization environments within
the same facility.  On a larger scale, the uncertainties
associated with the timestamp value will make
nanosecond precision moot.

Has anyone articulated what the overhead
differential is of an expression with a precision
of microseconds versus nanoseconds?


On 2015-11-23 01:08 PM, Jordan, Bret wrote:
> I mistyped in my last email: I meant to say microseconds, not
> milliseconds, aka 6 digits of precision, not 3 digits of precision.
> Wireshark and other networking / security tools are able to work with
> and provide 6 digits of precision. That is VERY common. What is not
> really common today is 9 digits of precision.



