Subject: Re: [dita] Re: Comparison between DITA and S1000D
john_hunt@us.ibm.com wrote:
>
> Erik said:
> <<More generally, the DITA TC should recommend as a preferred approach
> that human-readable topic content be modelled on a single type
> hierarchy. Of course, other kinds of content (for instance, invoice
> data) would be a distinct type hierarchy.>>
>
> Yes, I agree. If there's potential to S1000D in adopting the DITA
> architecture, then the DITA-ized S1000D would develop a type hierarchy
> with a base type. The question then becomes, why not start with the DITA
> base type? If not the DITA base type, then what's needed in the DITA
> base type to make it work?
>
> The advantages that ensue from a common base type are significant. It's
> this "specialization with a fallback" that enables much of the power of
> DITA's topic-based reuse model, and which distinguishes it from other
> approaches. It's what makes it possible to say that with DITA, it's
> possible to exchange, integrate, and reuse content across disparate
> information domains, such as IT, pharmaceutical, military, aviation,
> telecommunications, etc. This is one key, clear advantage DITA brings to
> the table. I'd hesitate to weaken it by splitting the DITA architecture
> model from its typing hierarchy and the common base it provides.

This is a laudable goal, but I think it's important to keep a couple of things in mind:

1. The DITA modules as currently defined are not suitable as the base for this sort of very wide use as the underpinnings of technical documentation, because the current modules are too narrow in their constraints. For example, none of the DTDs I use in my daily work for creating technical documents can be directly derived from DITA, because I use (and want) more levels of containment than DITA provides for. The same will be true for almost any DTD I have had a hand in designing.

2. The actual value, for interchange, of an architecture that is sufficiently general to actually underpin most or all technical documentation DTDs is questionable: any DTD that general will offer only limited value in terms of default presentation behavior, clear semantics, and so on.

My experience, based on trying to enable interchange and interoperation of markup-based documents at large scales over the last 15 years, is that we, as a community, tend to overvalue interchange and undervalue meeting local requirements, largely overestimating the amount of reuse and interchange that will (or can) actually happen. Part of this may be because the technology was simply not there to allow truly wide-scope interchange, but I think it has more to do with the fact that the actual cost of enabling and doing interchange is high enough that business requirements and realities tend to change before enabling systems can either be built and deployed, or used long enough to realize their benefits, which are, by necessity, long-term.

This is not to say that interchange is not valuable or possible, just that as the intended scope of interchange increases, the cost rises dramatically. Architectural mechanisms like DITA make it *possible* to do wide-scope interchange, but they don't necessarily significantly reduce the cost, because most of the cost factors come from infrastructure and management, not from the act of interchange itself. Said another way, something like DITA makes the impossible possible, but it doesn't necessarily make the expensive cheap. This is my experience.

For example, in safety-critical applications such as aircraft maintenance information, the importance of information correctness is so high that the information must be carefully reviewed and checked at every stage. So even when information can be transparently interchanged between organizations, it must still be inspected by humans.
This inspection requires a great deal of human effort, whose cost far outstrips any cost that would accrue from having to transform or re-author the data, such that any effort to eliminate transforms or re-authoring, while measurable, often ends up being over-optimization.

Note also that I'm not saying there's anything wrong with the DITA modules as they exist--clearly they are very useful within the scope of application for which they were defined. But they are nowhere near universal. I think it is the rare use case where data is actually interchanged in volume across industries to the point where, for example, S1000D content would need to be reused in a pure-DITA environment.

Also, I think it's easy to overstate the cost of transforms relative to the cost of defining, maintaining, and applying specifications of wide scope. That is, when the structural and semantic differences between source and target are small, as they tend to be with technical documents, it is almost always less expensive to create local transform-based solutions for interchange than to re-engineer either the source or the target (or both) in order to enable direct interchange. For example, a general S1000D-to-DITA transform could be implemented by one person in a matter of hours. It would be a monumental undertaking to ask the entire S1000D community to re-engineer their entire XML environment--DTDs, document collections, internalized human experience, policies, processors, and so on--to move to a DITA-based solution.

Of course, if S1000D is just a specification with no real users yet, it's a different story, and making it DITA-based *might* be compelling if a DITA-based document type can otherwise meet the local requirements of that community (which is not proven until tried).

Cheers,

E.
--
W. Eliot Kimber
Professional Services
Innodata Isogen
9390 Research Blvd, #410
Austin, TX 78759
(512) 372-8122

eliot@innodata-isogen.com
www.innodata-isogen.com