These changes address the issue with integers.
The addition of INF/-INF for date types does introduce a new ambiguity in the primitiveLiteral syntax though.
If I parse the value INF in an expression, I don't know whether it is meant to be a numeric or a Date (or DateTimeOffset) type. Numeric types can all be cast to each other (within limits), so among them it doesn't actually matter, but in the proposed 4.01 my parser is likely to parse INF as doubleValue (or decimalValue now) and then, when attempting to use it in a context where a DateTimeOffset is required, the cast will fail and result in null.
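To make the ambiguity concrete, here is a minimal sketch of an eager literal parser (all function names hypothetical, not from the spec): because the ABNF only admits INF as a numeric literal, the parser types it as Double, and a later cast to DateTimeOffset has nothing better to do than return null.

```python
# Hypothetical sketch of the parsing ambiguity described above.
def parse_primitive_literal(token: str):
    # 'INF' / '-INF' / 'NaN' are lexically numeric, so they get typed eagerly.
    if token in ("INF", "-INF", "NaN"):
        return ("Edm.Double", float(token.replace("INF", "inf").replace("NaN", "nan")))
    try:
        return ("Edm.Int64", int(token))
    except ValueError:
        pass
    try:
        return ("Edm.Double", float(token))
    except ValueError:
        return ("Edm.String", token)

def cast_to_datetimeoffset(typed_value):
    edm_type, value = typed_value
    if edm_type == "Edm.DateTimeOffset":
        return value
    return None  # a Double cannot be cast to DateTimeOffset -> null

# 'INF' parses as a Double, so using it where a DateTimeOffset is
# required yields null rather than an 'end of time' value.
print(cast_to_datetimeoffset(parse_primitive_literal("INF")))  # None
```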
The guidance here is interesting on whether to attempt INF/-INF with dates (look past the accepted answer too):
PostgreSQL does use infinity for both date types, but (curiously) these special values appear to be timestamps internally (and will cast to date as required), whereas the equivalent numeric values must be quoted (as strings) and presumably rely on a cast from string to numeric.
It’s a tight corner! The trouble is that primitiveLiteral has set out to make numeric types and date types parsable without any special quoting, which rules out any representation shared between non-castable types.
Given that you have maxdatetime and mindatetime as functions, is maxdatetime() now going to return INF? Or is there now a value of DateTimeOffset that compares greater than maxdatetime()?
Referring to the SO thread, it feels like you are trying to implement both patterns at once. If you’re committed to maxdatetime, I suggest sticking with that. Otherwise, promote maxdatetime/mindatetime to special values (of type DateTimeOffset) instead of functions. You would need maxdate and mindate too.
Can you please check whether this addresses your concerns, and provide feedback on the odata-tc mailing list?
Comment on: 4.01 Committee Specification Draft 02 / Public Review Draft 02
One of the new provisions in 4.01 is:
All numeric types allow the special numeric values ‑INF, INF, and NaN
This feels onerous given typical technologies for storing integer numeric types. The ABNF for values of Int16, for example, now reads:
int16Value = [ SIGN ] 1*5DIGIT / nanInfinity ; numbers in the range from -32768 to 32767
Clearly this now requires MORE than 16 bits to store.
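A quick sketch of the mismatch (regex transliteration of the ABNF is mine): every string the production accepts is lexically an int16Value, but the special values have no two's-complement bit pattern, so a plain 16-bit store cannot hold them.

```python
import re

# Transliteration of the draft ABNF:
#   int16Value = [ SIGN ] 1*5DIGIT / nanInfinity
#   nanInfinity = 'NaN' / '-INF' / 'INF'
INT16_VALUE = re.compile(r"^(?:[+-]?[0-9]{1,5}|-?INF|NaN)$")

def fits_in_16_bits(lexical: str) -> bool:
    """True only for ordinary digit strings in [-32768, 32767];
    INF/-INF/NaN are lexically valid but have no 16-bit representation."""
    if not INT16_VALUE.match(lexical):
        raise ValueError("not an int16Value")
    if lexical in ("INF", "-INF", "NaN"):
        return False
    return -32768 <= int(lexical) <= 32767

print(fits_in_16_bits("32767"))  # True
print(fits_in_16_bits("NaN"))    # False: lexically valid, not storable
```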
Given that the numeric integer types are typically used as keys for entities, there are some odd implications too. The definitions of INF and -INF can still work, as they compare equal to themselves and top/tail the value range when ordering. I’m more concerned about NaN, which doesn’t compare equal to itself; this is likely to cause issues if it is used as the value of an entity key. (I believe PostgreSQL allows these values as keys but treats NaN == NaN as true and NaN > x as true for all x != NaN; this goes against IEEE but seems necessary if they are to be used in contexts where ordering is required.)
To use XML Schema vocabulary, it feels like the abstract value spaces and lexical representations have become muddled. The desire to use special values in JSON representations of integers has serious implications for the abstract value space.
The ‘special’ values INF, -INF and NaN are actually values in the abstract value spaces of the Single and Double types (as per IEEE, extendable to Decimal). They are not universal special (numeric) values with a status similar to ‘null’. It feels like the language of the specification is sliding toward treating all of these in a similar way. Witness the definition of equals in Part 2 §184.108.40.206.1:
Each of the special values null, -INF, and INF is equal to itself, and only to itself.
The special value NaN is not equal to anything, even to itself.
Truncation of infinity is still infinity, and infinity cannot be represented in an int (I hope there's no question about this part).
My proposal is to clarify that the ‘special’ numeric values are values of type Single, Double or Decimal only. You already have sufficient type promotion to resolve any ambiguity when parsing the literals. For example, the less contentious promotion of the constant parsed from the string “42” to any of the numeric types is already accepted without the special modifiers used in OData 2/3.
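This proposal can be sketched as a literal-typing table (names hypothetical, type bounds are the standard Edm ranges): an ordinary integer literal like "42" promotes to every numeric type whose range admits it, while the special values admit only the floating/decimal types.

```python
# Sketch of the proposed typing rule: special values belong to
# Single/Double/Decimal only; ordinary literals promote by range.
FLOAT_TYPES = {"Edm.Single", "Edm.Double", "Edm.Decimal"}
INTEGER_TYPES = {"Edm.SByte", "Edm.Int16", "Edm.Int32", "Edm.Int64"}
INTEGER_RANGES = [
    ("Edm.SByte", -128, 127),
    ("Edm.Int16", -32768, 32767),
    ("Edm.Int32", -2**31, 2**31 - 1),
    ("Edm.Int64", -2**63, 2**63 - 1),
]

def candidate_types(literal: str):
    if literal in ("INF", "-INF", "NaN"):
        return set(FLOAT_TYPES)         # special values: floating types only
    try:
        n = int(literal)
    except ValueError:
        return set(FLOAT_TYPES)         # e.g. "4.2": fractional, floating only
    types = set(FLOAT_TYPES)
    for t, lo, hi in INTEGER_RANGES:
        if lo <= n <= hi:
            types.add(t)
    return types

print(sorted(candidate_types("42")))    # promotes to every numeric type
print(sorted(candidate_types("INF")))   # Single/Double/Decimal only
```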
The second part of my proposal is that if the abstract result of an operation cannot be represented in the return type defined for that operation, then the result should be the special value null. This would include the special case of integer division by zero (where 4.0 says the request fails) but also covers overflow of integers in other operations. I sense the panel dislikes failing requests, so my proposal fixes that by silently carrying on. The alternative is to continue to raise an exception, or to put in some more general provision that an expression which fails with a division by zero is treated as null (WITHOUT continuing the computation). The latter version has less impact on existing 4.0 behaviour, I guess, so it might be worth considering.
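The proposed semantics can be sketched as a checked evaluator (my own illustration, using None for null and Int32 as the example return type): any abstract result that falls outside the return type's range, and any integer division by zero, yields null instead of failing the request.

```python
# Sketch of the proposal: null (None) on overflow or division by zero.
INT32_MIN, INT32_MAX = -2**31, 2**31 - 1

def checked_int32(op, a, b):
    """Evaluate an Int32 operation; return None (null) if the abstract
    result cannot be represented in Int32, or on division by zero."""
    if op == "div" and b == 0:
        return None  # 4.0 fails the request here; the proposal returns null
    # NOTE: floor division used for brevity; OData div truncates toward zero.
    result = {"add": a + b, "sub": a - b,
              "mul": a * b, "div": a // b if b else None}[op]
    if result is None or not INT32_MIN <= result <= INT32_MAX:
        return None
    return result

print(checked_int32("div", 4, 0))      # None, not a failed request
print(checked_int32("mul", 2**30, 4))  # None: abstract result overflows Int32
print(checked_int32("add", 1, 2))      # 3
```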
The NULLIF pattern for treating division by 0 as null is fairly widespread, though, and it is more consistent with the definition of cast:
Numeric primitive types are cast to each other with appropriate rounding. The cast fails if the integer part doesn't fit into target type.
If the cast fails, the cast function returns null.
The idea of doing general type promotion to the largest integer type makes sense to me; if the result overflows Int64 then you’d get null (according to my proposal), which is freely castable to any of the integer types.
But I propose that Single operations that overflow Single should be promoted to Double in a similar way. The definition of cast will need to be modified so that casting a Double to Single with overflow results in INF (not a failed cast). The proposed difference between the way integers and floating-point numbers behave in this respect exists precisely because the latter can represent INF.
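The modified cast can be sketched like this (my own illustration; FLT_MAX is the largest finite IEEE 754 single, and the round-trip through a 32-bit representation stands in for single-precision rounding): a Double whose magnitude exceeds the Single range casts to INF of the matching sign rather than failing to null.

```python
import math
import struct

FLT_MAX = 3.4028234663852886e38  # largest finite IEEE 754 single

def cast_double_to_single(x: float) -> float:
    """Proposed cast: Double -> Single with overflow yielding INF,
    not a failed cast (null)."""
    if math.isnan(x) or math.isinf(x):
        return x                              # special values pass through
    if abs(x) > FLT_MAX:
        return math.copysign(math.inf, x)     # overflow -> signed infinity
    # Round-trip through 32 bits to apply single-precision rounding.
    return struct.unpack("<f", struct.pack("<f", x))[0]

print(cast_double_to_single(1e300))   # inf, not null
print(cast_double_to_single(-1e300))  # -inf
print(cast_double_to_single(1.5))     # 1.5 (exactly representable)
```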