
*Subject*: **[OASIS Issue Tracker] (ODATA-789) Primitive type Edm.Decimal is ill-defined in regard to Precision**

*From*: OASIS Issues Tracker &lt;workgroup_mailer@lists.oasis-open.org&gt;
*To*: odata@lists.oasis-open.org
*Date*: Thu, 26 Feb 2015 21:28:55 +0000 (UTC)

[ https://issues.oasis-open.org/browse/ODATA-789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Evan Ireland updated ODATA-789:
-------------------------------

Description:

(Relates to ODATA-771)

A quick recap of relevant text from the spec:

4.4 Primitive Types ... Edm.Decimal ... Numeric values with fixed precision and scale

6.2.3 Attribute Precision ... For a decimal property the value of this attribute specifies the maximum number of digits allowed in the property’s value; it MUST be a positive integer. If no value is specified, the decimal property has unspecified precision.

... Note: service designers SHOULD be aware that some clients are unable to support a precision greater than 29 for decimal properties and 7 for temporal properties. Client developers MUST be aware of the potential for data loss when round-tripping values of greater precision. Updating via PATCH and exclusively specifying modified properties will reduce the risk for unintended data loss.

First problem
=============

PROBLEM: By section 4.4, Edm.Decimal values have "fixed" precision, but section 6.2.3 later allows precision to be "unspecified", so it is not "fixed". We might interpret this as: the precision can be "fixed" in a CSDL Property facet, or it may be "fixed" at runtime, i.e. in a sent or received value (according to the number of significant digits in that value). But perhaps it simply indicates that the wording in section 4.4 is inappropriate.

Second problem
==============

The section 6.2.3 text suggests that precision greater than 29 could result in "data loss during round-tripping". For temporal properties that is reasonable: precision greater than 7 in the fractional seconds may result in truncation/rounding (losing some significant digits in the fraction, but not preventing receipt/storage of the value by the client).
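The temporal case can be sketched in Java with the JDK's java.time (a plain illustration of the behaviour described above, not OData library code); a client limited to 7 fractional-second digits, i.e. 100-nanosecond ticks, drops trailing digits but still stores the instant:

```java
import java.time.Instant;

public class TemporalTruncation {
    public static void main(String[] args) {
        // 9 fractional-second digits received; a client limited to a
        // temporal precision of 7 (100-nanosecond ticks) must drop two.
        Instant received = Instant.parse("2015-02-26T21:28:55.123456789Z");
        Instant stored = received.minusNanos(received.getNano() % 100);

        // The instant itself is still representable; only the trailing
        // fraction digits are lost.
        System.out.println(stored); // 2015-02-26T21:28:55.123456700Z
    }
}
```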
However for decimal values, it could result in the inability of the client to represent the value at all, let alone retain its significant digits. Try this out (C#):

    decimal x = decimal.Parse("123456789012345678901234567890");

PROBLEM: You don't get a loss of significant digits - the value simply cannot be represented (Parse throws an OverflowException). Furthermore, even a precision of 29 is too much for the C# decimal type; consider "99999999999999999999999999999" (29 nine digits). That type has a binary mantissa, so the maximum representable value lies between 28 and 29 decimal digits (about 7.9*10^28).

Third problem
=============

One might expect that when we talk about Precision in the CSDL spec, we are talking about the type's Value Space (see http://www.w3.org/TR/xmlschema11-2/#value-space), not its Lexical Space (since we have ATOM/JSON/ABNF documents to cover lexical representation).

From Wikipedia: http://en.wikipedia.org/wiki/Significant_figures#Identifying_significant_figures

    The significant figures of a number are those digits that carry meaning contributing to its precision. This includes all digits except:
    • All leading zeros;
    • Trailing zeros when they are merely placeholders to indicate the scale of the number (exact rules are explained at identifying significant figures); and
    • Spurious digits introduced, for example, by calculations carried out to greater precision than that of the original data, or measurements reported to a greater precision than the equipment supports.

From Computational Mathematics (by T.R.F. Nonweiler): http://www.jstor.org/discover/10.2307/2008016?sid=21105465034221&uid=2&uid=4&uid=3738776

    • The maximum number of digits available to the mantissa is called the precision, or number of significant digits.

So you will see that both references equate precision with significant digits. (Not to be confused with the notion of precision as it applies to positions after the decimal point, which for Edm.Decimal is the Scale.)
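The JDK's java.math.BigDecimal exposes exactly this notion: its precision() method counts the digits of the unscaled value, i.e. the significant digits. A small illustration (plain JDK, nothing OData-specific assumed):

```java
import java.math.BigDecimal;

public class SignificantDigits {
    public static void main(String[] args) {
        // precision() counts the digits of the unscaled value, i.e. the
        // significant digits; leading zeros are excluded.
        System.out.println(new BigDecimal("0.00123").precision()); // 3
        System.out.println(new BigDecimal("123.45").precision());  // 5

        // Caveat: BigDecimal retains the trailing zeros you wrote and
        // counts them, a looser rule than the sig-figs convention above.
        System.out.println(new BigDecimal("1.2300").precision());  // 5
    }
}
```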
Now CSDL 6.2.3 allows for "unspecified precision". We might then reasonably assume that Edm.Decimal can accommodate values with arbitrary precision, i.e. an arbitrary number of significant digits - for example, as with java.math.BigDecimal.

PROBLEM: Folks seem happy to accept that IEEE decimal floating point (64-bit and 128-bit) is compatible with CSDL Precision greater than 16 (64-bit) or 34 (128-bit). That indicates that we are not in agreement that CSDL Precision (as it relates to Edm.Decimal) is about significant digits. (The alternative is to allow that DECFLOAT in Edm.Decimal can have negative scale, but that is prohibited by the CSDL spec in section 6.2.4.)
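The incompatibility can be demonstrated with the JDK's MathContext.DECIMAL64, which models IEEE decimal64 (16 significant digits, round-half-even); a sketch, assuming the IEEE default rounding mode:

```java
import java.math.BigDecimal;
import java.math.MathContext;

public class DecfloatLoss {
    public static void main(String[] args) {
        // 20 significant digits: fine for an unlimited BigDecimal.
        BigDecimal exact = new BigDecimal("12345678901234567890");

        // MathContext.DECIMAL64 models IEEE decimal64: 16 significant
        // digits, round-half-even. The last four digits are silently lost.
        BigDecimal dec64 = exact.round(MathContext.DECIMAL64);

        System.out.println(exact.precision()); // 20
        System.out.println(dec64);             // 1.234567890123457E+19
    }
}
```

This is the "data loss" the Note in 6.2.3 warns about, but here it occurs at Precision 17 and above, not 29.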
Proposal
========

Make it clear that CSDL Precision for Edm.Decimal is equated with significant digits.

Furthermore, since (unlike with the temporal types) some clients will be unable to represent Edm.Decimal values with more than 28 digits at all (not just losing significant digits, but completely unable to represent the value), state 28 rather than the 29 we state currently. (C# doesn't rule the world (yet), but it is important to accommodate.)

Require that any client or server which can represent a value of type Edm.Decimal must preserve all of its significant digits. C# decimal is fine for this - it just can't represent large values. java.math.BigDecimal is fine too. But IEEE DECFLOAT is not acceptable when the Precision is greater than 16 (for 64-bit) or 34 (for 128-bit). For example, 9.9e6144 is not representable using DECFLOAT for Edm.Decimal, as IEEE DECFLOAT cannot retain 6145 significant digits.
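Under the proposed rule, a client would refuse, rather than silently round, any literal whose significant digits exceed what it can preserve. A minimal sketch (parseEdmDecimal is a hypothetical helper, not part of any OData client library):

```java
import java.math.BigDecimal;

public class PrecisionGuard {
    // Hypothetical helper (not part of any OData library): accept an
    // Edm.Decimal literal only if every significant digit survives on a
    // client limited to maxPrecision digits; otherwise refuse outright.
    public static BigDecimal parseEdmDecimal(String literal, int maxPrecision) {
        BigDecimal value = new BigDecimal(literal); // exact, arbitrary precision
        if (value.precision() > maxPrecision) {
            throw new ArithmeticException("needs " + value.precision()
                    + " significant digits; client preserves only " + maxPrecision);
        }
        return value;
    }

    public static void main(String[] args) {
        System.out.println(parseEdmDecimal("123.45", 28)); // fits: accepted as-is
        try {
            parseEdmDecimal("99999999999999999999999999999", 28); // 29 digits
        } catch (ArithmeticException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

Rejection makes the failure visible at the boundary instead of surfacing later as a silently corrupted round-trip.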
> Primitive type Edm.Decimal is ill-defined in regard to Precision
> ----------------------------------------------------------------
>
>              Key: ODATA-789
>              URL: https://issues.oasis-open.org/browse/ODATA-789
>          Project: OASIS Open Data Protocol (OData) TC
>       Issue Type: Improvement
>       Components: OData CSDL
> Affects Versions: V4.0_OS
>         Reporter: Evan Ireland
>          Fix For: V4.0_ERRATA03
--
This message was sent by Atlassian JIRA
(v6.2.2#6258)
