[Date Prev] | [Thread Prev] | [Thread Next] | [Date Next] -- [Date Index] | [Thread Index] | [List Home]
Subject: Different effects from order of conref and filtering processing
Subject: Different effects from order of conref and filtering processing

On the call today I said that different orders of filtering and conref resolution could not produce different results, but I was mistaken. I was remembering only one specific case (where the referencing element is not itself conditional). Rereading the Processing interoperability considerations appendix from DITA 1.2, I see that the order can in fact make a difference when the referencing element and the referenced element are both conditional on the same property but have different applicabilities, depending on the use of the "-dita-use-conref-target" attribute value. In that case, an xref target that would be available if you performed one process first might not be available if you performed the other first, resulting in different address resolution results for the same content.

This is, I think, an edge case (and should probably be presented as something to avoid if at all possible, which was part of the point of the interoperability appendix: to let you know what things can result in differences, so that you can avoid them if interoperation is a concern).

With DITA 1.3's "this topic" fragment identifier, key scopes, and branch filtering, there is greater opportunity for addressing differences, because the local effective content following conref resolution could result in different keys or key targets being effective, or in different elements being present with the ID pointed to by a key-based element reference. But the cause is the same in all cases: a conditional reference and a conditional target, so I don't think the nature of the problem is different.

This potential difference is, I think, unavoidable as long as we are unwilling or unable to mandate whether conref processing or filtering happens first. I still feel strongly that the correct order is conref followed by filtering, so that you avoid a conref to something that is subsequently filtered out.
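To make the order dependence concrete, here is a minimal Python sketch of the scenario (the element model, helper names, and filtering behavior are my own invention for illustration, not anything from the spec): a referencing element defers its @audience to its conref target via the -dita-use-conref-target token, and a condition excludes audience="novice".

```python
# Hypothetical miniature model of the interaction described above.
# Elements are plain dicts; all names here are invented for illustration.

USE_TARGET = "-dita-use-conref-target"

def filter_pass(elems, exclude_audience):
    """Drop elements whose @audience matches the excluded value.
    The use-conref-target token cannot be evaluated yet, so it is kept."""
    return [e for e in elems
            if e.get("audience") in (None, USE_TARGET)
            or e["audience"] != exclude_audience]

def conref_pass(elems):
    """Resolve @conref: pull content (and, for the use-conref-target
    token, the attribute value) from the target if it is still present."""
    by_id = {e["id"]: e for e in elems}
    out = []
    for e in elems:
        e = dict(e)
        tgt = by_id.get(e.get("conref"))
        if e.get("conref") and tgt:
            e["content"] = tgt["content"]
            if e.get("audience") == USE_TARGET:
                e["audience"] = tgt.get("audience")
        out.append(e)
    return out

doc = [
    {"id": "ref", "conref": "tgt", "audience": USE_TARGET, "content": None},
    {"id": "tgt", "conref": None, "audience": "novice", "content": "Hello"},
]

conref_first = filter_pass(conref_pass(doc), "novice")
filter_first = conref_pass(filter_pass(doc, "novice"))

print([e["id"] for e in conref_first])  # []       -- 'ref' inherits novice; both filtered
print([e["id"] for e in filter_first])  # ['ref']  -- target gone; dangling conref survives
```

In one order "ref" disappears entirely; in the other it survives with an unresolvable conref, so an xref or key reference to it resolves differently depending on the order chosen.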
But I understand why Robert's users, in particular, wanted to filter first (although the reason for needing it could be largely satisfied by a more sophisticated process that filters what it safely can before conref resolution, does conref resolution on what remains, and then completes filtering; but that's easy for me to say now, 10+ years on from when Robert's users started doing this stuff). So I think if any action is required, it is to update the interoperability appendix with additional details about the implications of the interaction of key scopes, branch filtering, filtering, and conref resolution.

Again, remember that when we talk about "mandating a processing order" what we would really be saying is "this is the correct answer, an answer you are guaranteed to get if you perform processes x, y, and z in this specific order". That does not mandate *implementation*; it simply says what the final processing result should be. Data standards need to be very careful not to specify implementation details in normative prose. It is always fine to describe sample algorithms as non-normative explanation, but the normative language has to focus on preconditions and postconditions, with the processing details left open for implementors.

What the spec says (and will continue to say) for conref and filtering is that there are two allowed postconditions for the same precondition.

Cheers,

E.
--
Eliot Kimber
Senior Solutions Architect
"Bringing Strategy, Content, and Technology Together"
Main: 512.554.9368
www.reallysi.com
www.rsuitecms.com
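P.S. The staged process mentioned above (filter what is safely filterable, resolve conrefs on what remains, then complete filtering) can be sketched as follows. This is a self-contained toy model; the element representation and all names are invented for illustration, not taken from any implementation:

```python
# Hypothetical sketch of a staged filter/conref/filter pipeline.
# Elements are plain dicts; all names here are invented for illustration.

USE_TARGET = "-dita-use-conref-target"

def staged_process(elems, exclude_audience):
    targets = {e["conref"] for e in elems if e.get("conref")}

    def safely_filterable(e):
        # Safe to evaluate now only if the element is not a conref
        # target and its @audience does not defer to the target.
        return (e["id"] not in targets
                and e.get("audience") not in (None, USE_TARGET))

    # Phase 1: early filtering of the safe subset.
    elems = [e for e in elems
             if not (safely_filterable(e)
                     and e["audience"] == exclude_audience)]

    # Phase 2: conref resolution on what remains.
    by_id = {e["id"]: e for e in elems}
    resolved = []
    for e in elems:
        e = dict(e)
        tgt = by_id.get(e.get("conref"))
        if tgt:
            e["content"] = tgt["content"]
            if e.get("audience") == USE_TARGET:
                e["audience"] = tgt.get("audience")
        resolved.append(e)

    # Phase 3: complete filtering now that all attributes are known.
    return [e for e in resolved
            if e.get("audience") != exclude_audience]

doc = [
    {"id": "ref", "conref": "tgt", "audience": USE_TARGET, "content": None},
    {"id": "tgt", "conref": None, "audience": "admin", "content": "Hello"},
    {"id": "extra", "conref": None, "audience": "novice", "content": "Bye"},
]

# 'extra' is safely filtered early; 'ref' waits for its target's value.
result = staged_process(doc, "novice")
print([e["id"] for e in result])  # ['ref', 'tgt']
```

The point of the staging is that elements uninvolved in conref get the early-filtering benefit, while anything whose effective attributes depend on conref resolution is deferred until those values are known.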