
Subject: RE: [cti] Timestamp Serialization Question

I completely agree with John. We need to remember the goals of TWIGS, one of which was simplicity, targeting the 80%. Changing timestamps from a single value with precision to a TimeRange object just makes things harder for everyone, for a benefit in only a small percentage of situations. If there is a particular place within STIX that requires a definitive time range, then we can talk about that later, but as I see it, John's proposal is the best for the 80%.




Terry MacDonald

Senior STIX Subject Matter Expert

SOLTRA | An FS-ISAC and DTCC Company

+61 (407) 203 206 | terry@soltra.com



From: cti@lists.oasis-open.org [mailto:cti@lists.oasis-open.org] On Behalf Of Wunder, John A.
Sent: Wednesday, 20 January 2016 11:25 PM
To: cti@lists.oasis-open.org
Subject: Re: [cti] Timestamp Serialization Question


I have to be honest, I really prefer the approach we outlined earlier (with a text precision field):

  • It’s very easy to understand and explain. We won’t have to explain to people how to do timestamps because it will be very obvious.
  • It can always be treated as a single timestamp, making it easier on consumers.

In cases where there's often uncertainty (i.e., incident timestamps, like Pat described) I think the approach Pat outlines is good. But those fields should always be a range, even if that range reduces to zero; otherwise consumers won't know what they're going to get and there will be problems.


IMO fields that sometimes contain a value of one type and sometimes contain a value of another type should be avoided, especially when one or the other will be much more popular. Clients will just neglect to code support for the uncommon type and barf when they actually see it.
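As a minimal sketch of the single-timestamp-plus-precision approach described above: a consumer can always treat the value as one timestamp, and only widen it into a range when it cares about precision. The `parse_time` helper and the width table here are illustrative only, not part of any proposal.

```python
from datetime import datetime, timedelta

# Illustrative widths for the text precision values discussed in this thread.
PRECISION_WIDTHS = {
    "second": timedelta(seconds=1),
    "minute": timedelta(minutes=1),
    "hour": timedelta(hours=1),
    "day": timedelta(days=1),
}

def parse_time(fields, name):
    """Return (start, width) for a timestamp field plus its optional
    companion "<name>_precision" field (default: second)."""
    ts = datetime.strptime(fields[name], "%Y-%m-%dT%H:%M:%SZ")
    precision = fields.get(name + "_precision", "second")
    return ts, PRECISION_WIDTHS[precision]

fields = {
    "initial_compromise_time": "2015-12-07T22:00:00Z",
    "initial_compromise_time_precision": "hour",
}
start, width = parse_time(fields, "initial_compromise_time")
# A consumer that doesn't care about precision can simply use `start`.
```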




From: <cti@lists.oasis-open.org> on behalf of Patrick Maroney <Pmaroney@Specere.org>
Date: Wednesday, January 20, 2016 at 12:28 AM
To: "Jordan, Bret" <bret.jordan@bluecoat.com>, Chris Ricard <cricard@fsisac.us>
Cc: Eric Burger <Eric.Burger@georgetown.edu>, "cti@lists.oasis-open.org" <cti@lists.oasis-open.org>
Subject: Re: [cti] Timestamp Serialization Question


Propose consideration of a TimeStampRange [Start, End]  construct as an alternative and/or addition to the proposed TimeStamp & Precision.  


Along with the scenarios already discussed in this thread, this could help address common Date/Time expression requirements for Investigations, Incident Reporting, etc., where we may only know that some key event occurred (1) between a given range of dates/times, (2) after a given Date/Time, or (3) before a given Date/Time.*


  • When conveying a TimeStamp with a precision of +/- 1 second, use the default TimeStamp expression:

"initial_compromise_time" :"2015-12-09T05:11:00Z"


This form can also be used in scenarios where precision is not important:


"Forensic_Disk_Image_Shipped" :"2015-12-09T05:11:00Z"

  • When conveying a TimeStamp with a higher precision, use the TimeStamp field as defined and specify time-secfrac to the precision you wish to convey (+/- "." 1*DIGIT):

### +/- 1 millisecond precision ###

"initial_compromise_time" : "2015-12-09T05:11:00.010Z"


### +/- 1 microsecond precision ###

"initial_compromise_time" : "2015-12-09T05:11:00.012341Z"


  • Add a TimeStampRange [Start, End] construct to specify the range of time when you wish to convey lesser precision or uncertainty.

### Sometime in this 24 hour period ###

"initial_compromise_time" : ["2015-12-09T05:00:00Z", "2015-12-10T05:00:00Z"] 


### Sometime before 05:00:00Z December 10th, 2015 ###

"first_data_exfiltrated_time": ["0000-00-00T00:00:00Z", "2015-12-10T05:00:00Z"] 


### Sometime after 05:00:00Z December 10th, 2015 ###

"initial_compromise_time" : ["2015-12-10T05:00:00Z", "0000-00-00T00:00:00Z"]
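A sketch of how a consumer might interpret the forms proposed above, where a field carries either a single timestamp or a [Start, End] range. The `parse_range` helper is hypothetical, and the handling of the all-zero sentinel (for an unbounded side) follows the examples above.

```python
from datetime import datetime

OPEN = "0000-00-00T00:00:00Z"  # sentinel for an unbounded side, per the examples above

def parse_range(value):
    """Interpret a field that is either a single RFC 3339 string or a
    proposed [start, end] TimeStampRange.  Returns (start, end), where
    None means unbounded on that side (illustrative helper only)."""
    if isinstance(value, str):
        t = datetime.strptime(value, "%Y-%m-%dT%H:%M:%SZ")
        return t, t  # a plain timestamp is a zero-width range
    start, end = value
    start = None if start == OPEN else datetime.strptime(start, "%Y-%m-%dT%H:%M:%SZ")
    end = None if end == OPEN else datetime.strptime(end, "%Y-%m-%dT%H:%M:%SZ")
    return start, end

# "Sometime before 05:00:00Z December 10th, 2015"
start, end = parse_range([OPEN, "2015-12-10T05:00:00Z"])
```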




* Some real-world Investigation / Incident Reporting example scenarios:

  • Organization receives an IOC (e.g., an IP Address for a known Malicious C2 channel).
  • Organization searches for matching activity, but only has a 90 Day Look-back/Retention Period.
  • Organization detects ongoing activity from a number of systems at the very beginning of their 90 Day look-back range.
  • Organization only knows at this point that the related compromise occurred sometime in the past 90+ days.


  • A traveling employee returns to the office after a 30 day road trip and connects their laptop to the internal network
  • Laptop immediately begins connection attempts to the IP Address of a known Malicious C2 channel
  • Organization confirms that no malicious activity was seen from this laptop prior to employee travel.
  • Organization only knows at this point that the laptop was likely compromised sometime in the 30 Day period the employee/laptop were offsite.


  • Parent Organization receives intelligence that reveals a large number of geographically distributed employees were exposed to a spear phishing attack containing a new 0Day.
  • Systems are managed by different local IT Organizations. Parent Organization develops a mitigation action plan for determining if the Employee opened the attachment. Action Plans include additional steps for the immediate isolation of exposed systems from networks, and the seizure, forensics imaging, and rebuild/replacement of all verified compromised assets.
  • Parent Organization instructs all downstream IT Organizations to (1) notify employees to immediately delete the email if unopened, (2) determine if the employee opened the attachment,  (3) isolate exposed/compromised systems, (4)  positively confirm isolation, and (5)  coordinate asset capture/replacement.
  • Many downstream Organizations have key employees with compromised laptops who are traveling (domestically and internationally) requiring additional logistical risk assessment/capture/replacement processes.
  • Internal/External Compliance Policies mandate reporting of all key remediation and mitigation actions. Multiple ranges are required to convey/track/report on multiple processes, key milestones, etc. (e.g., Division X Systems Isolated, Division Y Employees Notified, Forensics Images Received).

As investigations proceed, employee interviews are conducted and event logs and network- and host-based forensics evidence are collected and analyzed. Over time an increasingly accurate timeline is constructed. However, depending on the incident scope, complexity, duration, and the availability/quality of forensics evidence, some key facts may never be established.

Patrick Maroney

Office:  (856)983-0001

Cell:      (609)841-5104




Integrated Networking Technologies, Inc.

PO Box 569

Marlton, NJ 08053


From: "cti@lists.oasis-open.org" <cti@lists.oasis-open.org> on behalf of Bret Jordan <bret.jordan@bluecoat.com>
Date: Wednesday, January 20, 2016 at 12:09 AM
To: Chris Ricard <cricard@fsisac.us>
Cc: Eric Burger <Eric.Burger@georgetown.edu>, "cti@lists.oasis-open.org" <cti@lists.oasis-open.org>
Subject: Re: [cti] Timestamp Serialization Question


I think what you are calling out represents a lot of the way people think of things. For example, if you know the event happened in December 2015, but you were not sure of the day, then you would probably do:


timestamp = 2015-12-00T00:00:00Z 

precision = month


But that can be interpreted as 2015-11 - 2016-01


The other problem with precision is how you say: I know it happened in the first few days of January, around noon.









Bret Jordan CISSP

Director of Security Architecture and Standards | Office of the CTO

Blue Coat Systems

PGP Fingerprint: 63B4 FC53 680A 6B7D 1447  F2C0 74F8 ACAE 7415 0050

"Without cryptography vihv vivc ce xhrnrw, however, the only thing that can not be unscrambled is an egg." 


On Jan 19, 2016, at 20:42, Chris Ricard <cricard@fsisac.us> wrote:


<typing with my analyst hat on>


If I say “an incident occurred on Tuesday Jan 19”, I mean it occurred on Tuesday, between midnight and midnight.  I would represent this as:


Timestamp: 2016-01-19T00:00:00Z

Timestamp_precision: day


If I am reporting details on phishing campaigns reported during December 2015, I mean those reported between midnight Dec 1, 2015 through midnight Jan 1, 2016.  I would represent this as:


Timestamp: 2015-12-01T00:00:00Z

Timestamp_precision: month


This would be the equivalent of the following SQL (or something similar without whatever syntax errors I unwittingly included):


Select * from tbl_incidents where incidentDateTime >= '2015-12-01' and incidentDateTime < '2016-01-01'


Hope this makes sense,


Chris Ricard




From: Eric Burger
Sent: Tuesday, January 19, 2016 9:48 PM
To: cti@lists.oasis-open.org
Subject: Re: [cti] Timestamp Serialization Question


This is another violent-agreement, “yes, and” situation.


Yes, this is how the data gets generated in the wild.


The problem is that unless we put our foot down and choose whether the time is the midpoint of the bucket or the bottom of the bucket, the consumer HAS NO CLUE what the bucket is. It is really trivial if you wrote both the producer and the consumer: they will both encode your world view. It is really hard for a multivendor solution to have the same interpretation of what the bucket is unless we specify it here.


I really do NOT want to add yet another timestamp parameter. Precision is bad enough. “Error bars” or “count from the bottom” or “count from the middle” is really ugly. I would put the onus on the client and specify one and only one way to express time stamps.


On Jan 19, 2016, at 9:41 PM, Jason Keirstead <Jason.Keirstead@ca.ibm.com> wrote:


So, we could do it that way, which would require the producer to take the equivalent of 100% of their known precision and adjust their timestamps downward accordingly. I would argue strongly, though, that this is pretty much *never* how it is done in industry and would result in confusion. Normally the onus is on the consumer of information to interpret the producer's information as they see fit, once they know the precision.

Here is the difference:

- If I follow the specification below, and I read the time 12:00:00 off the clock and know my precision to be minute-level, then I would have to supply a timestamp of 11:59:00 with a precision of 1 minute (note here the importance that minute-level precision is not the same as 60-second precision - it actually requires 2x the confidence-interval time-boxing - this is important!). The consumer would then take that information and know "OK, the window starts at 11:59:00 and ends at 12:01:00".

- The way it is normally done instead, is the producer of the time-sensitive information just sends whatever time came off of their information producing source. The consumer of that information then constructs the time-box around whatever rules they see fit. To carry forward the above example, the producer would send me 12:00:00 with 1 minute precision, and I would know implicitly, if I care about this at all, that that event could have occurred any time between 11:59:00 and 12:01:00.

I think that the second method is how pretty much all systems behave. I have never known a system to behave the first way.
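The second (consumer-side) method could be sketched like this, assuming the +/- one-precision-unit window described above; the `time_box` helper and its width table are illustrative names, not an agreed API.

```python
from datetime import datetime, timedelta

def time_box(timestamp, precision):
    """Consumer-side time-boxing: center a window of +/- one precision
    unit around the time the producer reported (illustrative sketch)."""
    widths = {
        "second": timedelta(seconds=1),
        "minute": timedelta(minutes=1),
        "hour": timedelta(hours=1),
    }
    w = widths[precision]
    return timestamp - w, timestamp + w

# Producer sends 12:00:00 with minute precision; the consumer constructs
# the window itself.
t = datetime(2016, 1, 19, 12, 0, 0)
lo, hi = time_box(t, "minute")
# lo = 11:59:00, hi = 12:01:00 -- the interval described above
```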

Jason Keirstead
Product Architect, Security Intelligence, IBM Security Systems
www.ibm.com/security | www.securityintelligence.com

Without data, all you are is just another person with an opinion - Unknown 


From: Eric Burger <Eric.Burger@georgetown.edu>
To: cti@lists.oasis-open.org
Date: 01/19/2016 10:06 PM
Subject: Re: [cti] Timestamp Serialization Question
Sent by: <cti@lists.oasis-open.org>


I would offer that the important precision is not hours, minutes, or seconds, but a number of seconds. We also need to define whether the timestamp represents the middle of the range or the bottom.

For example, if I only transmit the hour portion of the timestamp of an event, then 12:00:00Z means anything from 12:00:00.000000000 to 12:59:59.999999999. However, if I transmit the closest hour portion of the timestamp of an event, then 12:00:00Z means anything from 11:30:00.000000000 to 12:29:59.9999999999.

Note that a typical data collection window is tenths of minutes. That is six seconds, not ‘seconds.’ I.e. 12:00:00Z means either 12:00:00Z - 12:00:05.999999999 or 11:59:57Z - 12:00:02.999999999.

My suggestion is that (1) the timestamp represents the bottom of the range of the bucket, and (2) for precision of less than a second (i.e., granularity of more than a second), the bucket is a span of seconds starting from the timestamp value and lasting "precision" seconds. So, examples would be:

12:00:00Z = event happened between 12:00:00Z - 12:00:00.9999999999Z [default precision = 1s]
12:00:00Z (precision = 60s) = event happened between 12:00:00Z - 12:00:59Z [formerly known as “minute precision”]
12:00:00Z (precision = 3600s) = event happened between 12:00:00Z - 12:59:59Z [formerly known as “hour precision”]
12:00:00Z (precision = 6s) = event happened between 12:00:00Z - 12:00:05.999999999Z [how else would you specify “tenth of a minute”?]
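This bucket scheme could be sketched as follows, with the timestamp as the bottom of a half-open bucket that is `precision` seconds wide (the `bucket` helper is an illustrative name):

```python
from datetime import datetime, timedelta

def bucket(timestamp, precision_seconds=1):
    """The scheme above: the timestamp is the bottom of the bucket and
    the bucket is `precision_seconds` wide (half-open interval)."""
    return timestamp, timestamp + timedelta(seconds=precision_seconds)

# precision = 3600s, formerly known as "hour precision"
start, end = bucket(datetime(2016, 1, 19, 12, 0, 0), precision_seconds=3600)
# event happened in [12:00:00, 13:00:00)
```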

On Jan 19, 2016, at 8:51 PM, Jason Keirstead <Jason.Keirstead@ca.ibm.com> wrote:

The use case as I understand it at a high level is so that when someone submits a timestamp of 12:00:00 Zulu, we know the difference between whether they truly mean exactly 12:00:00 on the button, or whether they only have second-level precision available to them. And this is required because we aren't mandating a fixed format, but RFC 3339, which is variable.

From: "Eric Burger" <Eric.Burger@georgetown.edu>
Date: Tue, Jan 19, 2016 9:30 PM
Subject: Re: [cti] Timestamp Serialization Question

I’m still clueless as to the use case.

Not a negative statement, but I would like to see the concise reason we need ‘precision’ before I weigh in, if at all.

On Jan 19, 2016, at 3:22 PM, Jordan, Bret <bret.jordan@BLUECOAT.COM> wrote:

There tend to be two options for dealing with objects that have multiple timestamps and their corresponding precision. Sean and I have been talking through the pros and cons of these. We would like to get everyone's opinion. Which do you prefer, option 1 or option 2?

Option 1:
This option puts the burden on the JSON serialization format, adding an extra "_precision" field for each timestamp-enabled field. This is a much flatter representation that is easier to parse and process, but the con is that it requires unique field names.
{
  "type": "incident",
  "initial_compromise_time": "2015-12-07T22:00:00Z",
  "initial_compromise_time_precision": "hour",
  "first_data_exfiltrated_time": "2015-12-09T05:11:00Z",
  "first_data_exfiltrated_time_precision": "minute",
  "incident_opened_time": "2016-01-15T11:19:22Z",
  "incident_closed_time": "2016-01-19T17:24:17Z"
}

Option 2:
This option will require a nested object/struct to store this data, and will have an extra layer of indirection for all of those times when the timestamp is at the default precision.
{
  "type": "incident",
  "initial_compromise_time": {
    "timestamp": "2015-12-07T22:00:00Z",
    "timestamp_precision": "hour"
  },
  "first_data_exfiltrated_time": {
    "timestamp": "2015-12-09T05:11:00Z",
    "timestamp_precision": "minute"
  },
  "incident_opened_time": {
    "timestamp": "2016-01-15T11:19:22Z"
  },
  "incident_closed_time": {
    "timestamp": "2016-01-19T17:24:17Z"
  }
}
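For comparison, a consumer could normalize either serialization into the same internal form. This sketch (using a hypothetical `normalize` helper, with the JSON fragments completed into full objects) shows that the two options carry the same information:

```python
def normalize(obj):
    """Normalize an incident in either serialization option into
    {field: (timestamp, precision)} pairs (illustrative helper only)."""
    out = {}
    for key, value in obj.items():
        if key == "type" or key.endswith("_precision"):
            continue
        if isinstance(value, dict):  # Option 2: nested struct
            out[key] = (value["timestamp"],
                        value.get("timestamp_precision", "second"))
        else:                        # Option 1: flat sibling field
            out[key] = (value, obj.get(key + "_precision", "second"))
    return out

option1 = {
    "type": "incident",
    "initial_compromise_time": "2015-12-07T22:00:00Z",
    "initial_compromise_time_precision": "hour",
}
option2 = {
    "type": "incident",
    "initial_compromise_time": {
        "timestamp": "2015-12-07T22:00:00Z",
        "timestamp_precision": "hour",
    },
}
assert normalize(option1) == normalize(option2)
```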



Bret Jordan CISSP
Director of Security Architecture and Standards | Office of the CTO
Blue Coat Systems
PGP Fingerprint: 63B4 FC53 680A 6B7D 1447 F2C0 74F8 ACAE 7415 0050
"Without cryptography vihv vivc ce xhrnrw, however, the only thing that can not be unscrambled is an egg."

