Subject: Re: [dita] The value of re-use and interchange
- From: Michael Priestley <mpriestl@ca.ibm.com>
- To: "W. Eliot Kimber" <ekimber@innodata-isogen.com>
- Date: Tue, 7 Dec 2004 16:27:00 -0500
Eliot writes:
>I want to make sure we're being realistic about what the relative value
>is of different types of re-use and interchange because I think some of
>the values are being or may be overstated or overvalued. But that could
>just be my jaded view of the world.
>For example, consider the effort involved in creating a DITA map over a
>repository of several thousand content objects. Even with sophisticated
>authoring tools it's a significant conceptual challenge that many
>technical writers are simply not prepared for or willing to take on.
There are many products already out there with many
thousands of topics each, all managed via maps. This includes reuse of
content across multiple products.
Typically it's managed as one set of maps per component,
plus additional maps per product, plus additional maps per solution. Each
individual map covers ca. 20-100 topics. Just as chunking content into
topics is good, you also need to chunk your model once it grows beyond
a certain point.
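To make the layering concrete, here is a minimal sketch of a product-level
map that pulls in per-component maps (the file names and titles are
hypothetical, not from any actual product):

```xml
<!-- product.ditamap: product-level map reusing component maps -->
<map title="Widget Server 2.0">
  <!-- each component keeps its own map of ca. 20-100 topics -->
  <topicref href="install/install.ditamap" format="ditamap"/>
  <topicref href="admin/admin.ditamap" format="ditamap"/>
  <!-- product-specific topics sit alongside the reused components -->
  <topicref href="whats_new.dita"/>
</map>
```

A solution-level map can then reference several product maps the same way,
so no single map ever has to cover thousands of topics directly.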
>This means that you likely have to hire, train, support, and retain
>highly skilled information developers to create and maintain your maps.
>Good for the skilled writers but an additional cost to the organization
>when you could have had less direct re-use but less skilled (but equally
>effective) writers. That is, sometimes the less sophisticated, brute
>force approach to document creation and management is the better
>business decision even though it's less elegant technologically.
It took about two days of training to get writers
up to speed with maps.
>- Link management in the context of modular information systems is a
>challenge that requires significant investment in information management
>infrastructure. There are, to date, no commercial tools that, in my
>opinion, satisfy this requirement, especially in the context of long
>product life cycles with lots of revision. I think I know *how* to solve
>this problem, and we (ISOGEN) have published our ideas and urged anyone
>who wants to implement them, but to date nobody has (we did but for
>business reasons have been unable to market that code).
Maps did the trick for us.
>- Content interchange *within* enterprises is generally much more
>valuable than interchange *across* enterprises. That is, I can get a lot
>of value interchanging content between the product group, the training
>group, and sales group, but much less value interchanging between myself
>and my print engine supplier, for the simple reasons that the cost of
>enabling that cross-enterprise interchange is high, the actual volume of
>data interchanged is low, and the interchanged data will likely need
>local re-authoring anyway.
Context is everything. I agree reuse throughout the
lifecycle is important, but look at eclipse.org for just one example of
many different companies reusing each other's code and content. Customers
want solutions, and if the software is integrated then the docs need to
be too.
>In practice it's easier to do interchange via
>transformation than by standardization across enterprise boundaries
>except where volumes are high or there's some other non-typical
>requirement that demands standardization. Implementing transforms is
>cheap relative to the cost of defining, implementing, and
>enforcing interchange standards.
Your statement would be true if we didn't have specialization.
That changes the picture entirely.
....
>So while code re-use is always valuable, I find its value relative to
>other values and costs to generally be non-compelling, simply because it
>tends to have diminishing value as a given system becomes more
>sophisticated and more specialized. The place where I find code re-use
>most valuable is in the implementation of core generic semantics, like
>link address resolution, transclusion resolution, and so on, all of
>which are (or can be) completely generic and independent of specific
>content semantics. For example, I've only ever written XPath resolution
>in XSLT once, but I've written templates to format chapters dozens of
>different ways.
You speak as a professional system implementer. For
any given doc group, programming skills of any kind are likely to be in
short supply. We can do a quick poll of the group to confirm my anecdotal
impression, but I'm betting most groups currently using or implementing
DITA reused existing DITA processing.
>If the question is "create an enterprise-specific document type or
>re-use the production tools for standard doctype X" it's not even an
>issue--the enterprise-specific document type wins every time because
>satisfying the enterprise's information capture and representation
>requirements is almost always the most important thing (and always is
>if the time scope of the system is anything more than a year or two).
Hence the modularization rules for XSLT reuse in the
DITA spec. You can have reuse of processing and have enterprise-specific
rules; once again, that is one of the main points of specialization.
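As a rough sketch of that pattern (the file name, domain name, and class
value are hypothetical): a specialization-specific stylesheet imports the
base DITA processing at lower precedence and overrides only the rules it
needs, so everything else is reused as-is:

```xml
<!-- mydomain2html.xsl: reuse base DITA HTML processing, override one rule -->
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- xsl:import gives the base rules lower precedence, so they
       still apply to everything this stylesheet does not match -->
  <xsl:import href="dita2html.xsl"/>

  <!-- enterprise-specific rule, keyed off the specialized class attribute -->
  <xsl:template match="*[contains(@class,' mydomain/warning ')]">
    <div class="warning">
      <xsl:apply-templates/>
    </div>
  </xsl:template>
</xsl:stylesheet>
```

Because matching is on the @class ancestry rather than the element name,
unspecialized processing keeps working for specialized content, and the
enterprise override layers on top without copying the base code.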
>So to summarize, I think that it is easy to oversell and overvalue the
>following:
>
>- cross-enterprise standards-based interchange of content (that is,
>interchange in terms of a standardized document type, rather than by
>transformation).
>
>- wide-scope re-use of content modules.
>
>- re-use of existing code, especially for rendition
>
>It is this experience and analysis that causes me to focus much more on
>the core infrastructure aspects of something like DITA, i.e., the
>specialization mechanism and the general shape of the base types, than
>on code of the moment or issues of cross-enterprise interchange.
I appreciate the depth of experience you bring to
this forum. We obviously have fairly deep experience on our side as well.
I hope you appreciate why we are coming to the table with the position
we have. The problems we are solving are real, are substantial, and are
shared across the industry. IBM is not alone in needing to partner on customer
solutions, in needing to refactor products for different markets, and in
needing to provide development support for writers on a budget. Keep in
mind that consultants only get invited into situations where management
is willing to spend money on a solution; everywhere else, we need to find
the most efficient solution within our means. Reinventing from scratch
each time simply is not feasible.
DITA is a way to manage escalating requirements without
equivalently escalating costs, and our experience has proven that it works.
The lower value you place on this reflects your situation, but it does
not reflect ours. I'd welcome some reports from the rest of the TC:
- Are you reusing or planning to reuse DITA design modules?
  (eg in your own specializations)
- Are you reusing or planning to reuse content across specializations?
  (eg with maps that point to content originating from different
  authoring groups)
- Are you reusing or planning to reuse code from the DITA package?
  (eg a customized PDF or HTML output that reuses existing XSLT modules)
Michael Priestley
mpriestl@ca.ibm.com