OASIS Mailing List Archives — cti list


Subject: Re: [cti] timestamp proposal for STIX 2.0 RC3


So I’ve been thinking a lot about this, and it’s very similar to the debate about whether we should limit the maximum lengths of strings. We decided by ballot not to add those limits, and while it wasn’t an overwhelming result, I think for consistency we should continue that pattern here. That would also be consistent with the ISO8601 text, which didn’t have any limits.

 

I do think Andy’s point is a good one: we should provide recommendations for what implementations should support in terms of string limits, maximum number sizes, time precision, etc. IMO all of that should go in an implementer’s guide.

 

If this ends up biting us later, we can add the limits as normative requirements in a later version. Going down this path would lead to the following spec text:

 

 

2.10.  Timestamp

Type Name: timestamp

Timestamps in STIX are represented as the number of seconds since the Unix epoch (1 January 1970, 00:00:00) in UTC (Coordinated Universal Time).

The JSON MTI serialization uses the JSON number type <TODO: add reference> when representing timestamps.
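As a sketch of how the proposed epoch-number representation could round-trip with the current ISO8601 text form (the value below is an arbitrary example, not from the spec):

```python
from datetime import datetime, timezone

# Hypothetical example value: a JSON number of seconds since the Unix
# epoch (UTC), with a fractional part carrying sub-second precision.
ts = 1481128800.123

# Convert to the current ISO8601-style text representation...
dt = datetime.fromtimestamp(ts, tz=timezone.utc)
print(dt.isoformat())  # 2016-12-07T16:40:00.123000+00:00

# ...and back. Note that datetime only keeps microsecond precision,
# so any sub-microsecond digits would be lost at this step.
back = dt.timestamp()
assert abs(back - ts) < 1e-6
```

This also illustrates the precision point debated below: the text form and the numeric form agree down to microseconds without any special handling.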

 

 

That text would let you represent the same dates you can represent in the current ISO8601 format, which everyone has been OK with. It would mean that different implementations may support different precisions, but as the ballot showed, people tend to be fine with that, and again, that’s how the old text worked, so it’s not even a change.

 

Let’s have a quick informal ballot, and depending on how that goes we should probably move to a formal ballot. In order of preference, what do you think of the following options?

 

1. Keep the old ISO8601 format, no limits on acceptable dates.

2. Keep the old ISO8601 format, but add limits on acceptable precision and date ranges.

3. Use this new epoch format, no limits on acceptable dates.

4. Use this new epoch format, but add limits on acceptable precision and date ranges.

 

I have no preference on this topic; they all seem workable. I just want us to agree on something and move on.

 

John

 

On 12/7/16, 12:44 PM, "Michael Chisholm" <chisholm@mitre.org> wrote:

 

    On 12/7/2016 11:36 AM, Bret Jordan (CS) wrote:

    > So we are trying to define support up to picoseconds?  Do we really need that at this point?  Or are microseconds sufficient for now?

    >

    >

    > If we do picoseconds and someone stores this data in a float64 and that truncates the data, does that invalidate it or cause something to blow up?

    >

    >

    > I really do not want to design something that is either not going to work in code or is going to be very counter intuitive to a developer.

    >

    >

    > Picking the number "4102462800" seems very arbitrary to me.

    >

    >
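For context, the number Bret quotes is not pulled from thin air: it decodes to the start of the year 2100 (midnight US Eastern, i.e. 05:00 UTC). A quick check:

```python
from datetime import datetime, timezone

# Decode the value quoted above: 4102462800 seconds past the Unix epoch.
dt = datetime.fromtimestamp(4102462800, tz=timezone.utc)
print(dt.isoformat())  # 2100-01-01T05:00:00+00:00
```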

   

    I don't think we should tell people how to write their code, including

    whether to use an IEEE-754 double-precision float representation or

    something else.

   

    Perhaps what you're really getting at is all the integral values in that

    range should be exactly representable, and perhaps a few digits to the

    right of the decimal point.  Some normative text regarding

    approximation/truncation might be in order.
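Andy's point about exact integer representability, and Bret's float64 truncation question, can both be illustrated concretely (a small Python sketch; the epoch value is just an example):

```python
import math

t = 1_481_128_800  # a whole-second epoch value, roughly 1.48e9

# Every integer up to 2**53 is exactly representable in an IEEE-754
# double, so whole-second timestamps are safe far beyond the year 2100.
assert float(t) == t
assert float(2**53) == 2**53

# Fractional precision is bounded, though: near 1.48e9 the gap between
# adjacent doubles is about 2.4e-7 seconds, so microseconds survive but
# picoseconds are silently rounded away.
print(math.ulp(float(t)))   # ~2.4e-07

t_pico = t + 1e-12          # try to add one picosecond
print(t_pico == float(t))   # True: the picosecond is lost
```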

   

    The same consideration could be given to all numeric types.  Every float

    could be subject to some truncation, and every number subject to

    bounds-checking by a particular implementation.  I've often thought that

    although you shouldn't mandate implementation, it's probably wise to

    mandate some minimum range/precision, to ensure a minimum level of

    interoperability.  Choosing greater range/precision could be an

    opportunity for products to distinguish themselves...?

   

    On the other hand, the "no better than picosecond precision" restriction

    is an upper bound (not on range or precision in the IEEE-754

    sense, but a bound of a sort).  Why is that upper bound needed?  Why

    not let tools express additional fractional digits if they want, and

    allow implementations to ignore excess fractional digits (up to a point)

    if they want to?

   

    Hmm... just noticed the STIX spec defines the float type as IEEE-754

    double-precision numbers too.  Not sure if that is saying that

    implementations must represent floats that way, or if it thinks that's

    somehow the same thing as a real number (it says "a number with a

    fractional part").  I don't agree with either of these.  I think the

    value space should be real numbers, and the spec should probably give

    some minimum range/precision restrictions as described above.  That

    might rule out some implementations, but not force anyone into any

    particular implementations.  And the IEEE-754 representation is *not*

    the same as a "number with a fractional part".  The representation comes

    with certain consequences for range/precision, and can't represent any

    old number with a fractional part.  (The restriction on Inf and NaN is

    probably wise though.)
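The distinction Andy draws between an IEEE-754 double and "a number with a fractional part" is easy to demonstrate (a small Python sketch):

```python
from decimal import Decimal

# Many simple decimal fractions have no exact binary representation,
# so a double only stores the nearest representable value.
print(Decimal(0.1))      # 0.1000000000000000055511151231257827...
print(0.1 + 0.2 == 0.3)  # False: both sides are rounded doubles
```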

   

    Andy

   

    

    

    ---------------------------------------------------------------------

    To unsubscribe from this mail list, you must leave the OASIS TC that

    generates this mail.  Follow this link to all your TCs in OASIS at:

    https://www.oasis-open.org/apps/org/workgroup/portal/my_workgroups.php

    

    


