Best Practice for Leveraging Legacy Translation Memory when Migrating to DITA
Many organizations have previously translated content that was authored in non-DITA tools (such as Word and FrameMaker). When migrating that legacy content into a new DITA authoring environment, what should the organization do about its legacy translation memory? This legacy translation memory (TM) represents a large financial investment, and it can't simply be thrown away because a new authoring architecture is being adopted.
This article describes best practices that will help organizations to use their legacy TM for future translation projects that are authored in DITA, in order to minimize the expense of translating DITA-based content.
Before we get into the details, let's define the terms used in the localization industry so that subsequent sections will be better understood.
- CAT tool
  Computer Aided Translation tool, which helps the translator translate the source content. CAT tools usually leverage Translation Memory to match sentences and inline phrases that were previously translated. In addition, some CAT tools use Machine Translation to translate glossary terms and other company-specific terms (extracted from a terminology database).
- Matching
  The level of accuracy with which CAT tools can match content being translated to the TM. The levels of matching are defined as follows:
  - Fuzzy matching
    The source segment being matched is similar, but not identical, to the source language segment in the TM.
  - Leveraged matching
    The source segment being matched is identical to the matched segment, but the context is not known.
  - Exact matching
    The source segment being matched is identical to the matched segment and comes from exactly the same context.
- MT
  Machine Translation is a technology that translates content directly from the source without human intervention. Used in isolation, MT usually generates an unusable translation. However, when integrated into a CAT tool to translate specific terminology, MT is a useful technology.
- TM
  Translation Memory is a technology that reuses translations previously stored in the database used by the translation tool. TM preserves the translation output for reuse with subsequent translations.
- TMX
  Translation Memory eXchange, an industry-standard XML format for exchanging TM between CAT tools.
- XLIFF
  XML Localisation Interchange File Format, a document format used to exchange translatable content between CAT tools.
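To make the TMX exchange format concrete, here is a minimal sketch that builds a TMX document with Python's standard library. The element names (tmx, header, body, tu, tuv, seg) follow the TMX specification; the segment text and tool names are invented for illustration.

```python
# Sketch: build a minimal TMX document with Python's standard library.
# Element structure follows the TMX spec; the content is invented.
import xml.etree.ElementTree as ET

tmx = ET.Element("tmx", version="1.4")
ET.SubElement(tmx, "header", {
    "srclang": "en", "segtype": "sentence", "datatype": "xml",
    "creationtool": "example", "creationtoolversion": "1.0",
    "adminlang": "en", "o-tmf": "example",
})
body = ET.SubElement(tmx, "body")

tu = ET.SubElement(body, "tu")  # one translation unit: source + target
for lang, text in [("en", "Close the valve."), ("fr", "Fermez la vanne.")]:
    tuv = ET.SubElement(tu, "tuv", {"xml:lang": lang})
    ET.SubElement(tuv, "seg").text = text

print(ET.tostring(tmx, encoding="unicode"))
```

A real TM export contains thousands of such translation units, each with one `tuv` per language.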
Recommended Best Practices
If you keep the following points in mind, you should be able to get the most out of your existing translation memory when you send your DITA documents for translation:
Ensure your translation service provider uses a tool that supports TMX (Translation Memory eXchange). This will ensure you can migrate your TM between CAT tools that support the industry standard for TM interchange. This is important not only to free you from dependence on a single translation service provider, but also to allow you to fine-tune your segmentation rules to better match the DITA-based XML source documents you'll be sending for translation.
Provided the structure of the DITA-based content has not changed radically from the legacy documents, the CAT software should achieve exact matches on most segments in the TM. As long as the legacy TM aligns with the DITA source at the sentence level, the translation software should also achieve leveraged matches on the block elements. Good CAT tools break DITA block elements down into sentence-level segments, which ensures better matching against the legacy TM. Usually, the DITA content is transformed into XLIFF, which can handle segments at either the block or sentence level.
Inline elements may not match at all, or may only fuzzy match. If a CAT tool is used to preprocess the TM to prepare it for the DITA-based translation project, inline elements should yield exact matches. Note that a good TM engine should help you recover about 70% of the inline tags; inline markup is the main area where matching is prone to fail.
If conrefs are used as containers for reusable text, these items may not match exactly (a fuzzy match at best). However, since each of these items needs to be translated only once, and should at least fuzzy match, this should not result in significant translation expense. For best practices on using conref elements in DITA documents that need to be translated, please see XREF TO CONREF BEST PRACTICE.
When text entities are used as containers for reusable text, it is preferable to use a CAT tool that extracts translatable text from the XML files using an XML parser. The XML parser will insert the content of the text entities into the source text that the translator uses as a reference. This allows the translator to check that the translated segments flow correctly in the target language. If text entities are translated separately from the context where they are used, there may be grammatical inconsistencies in the final text when the translated DITA files are published.
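The entity-expansion behavior described above can be demonstrated with Python's standard XML parser (expat, via ElementTree), which expands internal text entities in place so the translator sees the full sentence in context. The entity name and topic content here are invented for illustration.

```python
# Sketch: an XML parser expands a text entity into the surrounding
# sentence, giving the translator the full sentence as context.
# The entity name and content are invented for illustration.
import xml.etree.ElementTree as ET

dita = """<!DOCTYPE topic [
  <!ENTITY prodname "Acme Widget Pro">
]>
<topic id="install"><body><p>Install &prodname; before configuring the network.</p></body></topic>"""

root = ET.fromstring(dita)
p = root.find("./body/p")
print(p.text)  # the entity is expanded in place within the sentence
```

If the entity text were translated in isolation instead, the translator would never see it in a sentence, which is how the grammatical inconsistencies mentioned above creep in.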
You can export the legacy TM to a TMX file. The TMX file is an XML file, which can be manipulated to better align the translation segments with the DITA markup. The modified TMX file can then be converted back into a TM. This new TM will provide more exact matching against your DITA content than the legacy TM will.
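As a sketch of the kind of manipulation involved, the snippet below strips legacy inline markers (TMX `bpt`/`ept` elements, which wrap native format codes) out of a segment so that its text aligns with plain sentence-level DITA segments. A real cleanup would loop over every `seg` in the exported TMX; the segment text is invented.

```python
# Sketch: flatten legacy inline markers out of a TMX <seg> so the
# remaining text matches plain DITA sentence segments.
import xml.etree.ElementTree as ET

def flatten_seg(seg):
    """Drop <bpt>/<ept>/<ph> markers, keeping only the translatable text."""
    parts = [seg.text or ""]
    for child in seg:
        parts.append(child.tail or "")  # discard the native code inside the marker
    for child in list(seg):
        seg.remove(child)
    seg.text = "".join(parts)

seg = ET.fromstring(
    '<seg>Press <bpt i="1">&lt;b&gt;</bpt>Enter<ept i="1">&lt;/b&gt;</ept> to continue.</seg>'
)
flatten_seg(seg)
print(ET.tostring(seg, encoding="unicode"))  # <seg>Press Enter to continue.</seg>
```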
This process of creating a better-aligned TM should improve TM matching by 10-20%. Whether it's worth the effort and expense depends on the size of the DITA documents to be translated and the number of target languages. If the number of target languages is small, it may be more economical to retranslate fuzzy matches in a separate file. However, if the word count is high and there are many target languages, tuning the TM will yield substantial translation savings.
When tuning your legacy TM, take the following into account:
Unmatched tags — Unmatched tags can result from conditional text marked up in legacy tools (such as FrameMaker), or from block elements that contain several sentences sharing a common format marker (for example, a paragraph of several sentences marked as bold: after sentence-level segmentation, the first sentence contains only an opening bold tag and the last sentence contains only a closing bold tag).
Segmentation rules — The segmentation rules used for translating legacy material may not be well suited for XML documents. For example, your legacy Word or FrameMaker-based segmentation rules may include a rule to terminate a segment after a colon, to separate a procedure title from the steps. Since DITA uses markup to indicate where the procedure title ends and the steps begin, this segmentation rule can be discarded.
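The colon rule can be illustrated with a toy regex-based segmenter. Real CAT tools use much richer rule sets (typically expressed in SRX, the Segmentation Rules eXchange format); the rule sets and sample text below are simplified for illustration.

```python
# Sketch: a toy segmenter showing why the legacy colon rule can be
# dropped for DITA content. Rule sets are simplified for illustration.
import re

def segment(text, terminators):
    """Split text into segments after any of the given terminator characters."""
    pattern = "(?<=[" + re.escape(terminators) + r"])\s+"
    return [s for s in re.split(pattern, text.strip()) if s]

text = "To install the software: Insert the disc. Run setup."

# Legacy rules: a colon ends a segment (it separated a title from its steps).
print(segment(text, ".!?:"))  # 3 segments
# DITA rules: the colon rule is dropped; markup already separates title and steps.
print(segment(text, ".!?"))   # 2 segments
```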
When your DITA content is ready to be translated for the first time, do the following:
- Export the DITA documents to XLIFF.
- Import the XLIFF files into your CAT tool.
- Run the translation against the TM.
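As an illustration of the export step, here is a toy extractor that wraps the text of DITA block elements in an XLIFF 1.2 skeleton using Python's standard library. The XLIFF element names and namespace follow the 1.2 specification; the topic content and file name are invented, and a real DITA-to-XLIFF transform would also carry inline markup and metadata across.

```python
# Sketch: wrap DITA block-element text in an XLIFF 1.2 skeleton.
# Element names per the XLIFF 1.2 spec; the topic content is invented.
import xml.etree.ElementTree as ET

XLIFF_NS = "urn:oasis:names:tc:xliff:document:1.2"

dita = ET.fromstring(
    '<topic id="t1"><title>Safety</title>'
    "<body><p>Wear gloves.</p><p>Keep the area dry.</p></body></topic>"
)

ET.register_namespace("", XLIFF_NS)
xliff = ET.Element(f"{{{XLIFF_NS}}}xliff", version="1.2")
file_ = ET.SubElement(xliff, f"{{{XLIFF_NS}}}file", {
    "original": "t1.dita", "source-language": "en",
    "target-language": "fr", "datatype": "xml",
})
body = ET.SubElement(file_, f"{{{XLIFF_NS}}}body")

# One trans-unit per translatable block element (title and paragraphs here).
for i, el in enumerate(dita.iter()):
    if el.tag in ("title", "p"):
        tu = ET.SubElement(body, f"{{{XLIFF_NS}}}trans-unit", id=str(i))
        ET.SubElement(tu, f"{{{XLIFF_NS}}}source").text = el.text

print(ET.tostring(xliff, encoding="unicode"))
```

The CAT tool fills in a `target` element alongside each `source` during translation, and the translated XLIFF is then merged back into the DITA files.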
You should get exact matching on the plain text and fuzzy matching on the tags. It may be possible to automatically recover 70% of the tags. Depending on the algorithm used to measure quality, this means you will achieve about 80% to 95% matching overall.
Once the translator has completed the translation, the TM should be exported as a TMX file. This TMX will now correctly tag the DITA block elements as well as correctly segment the sentences, and should therefore be used as the TM for the next DITA-based translation project. For future localization projects, the new TMX should yield exact matching at the segmentation level used for translation (block or sentence).
It should be noted that, in general, although sentence level segmentation provides better matching, working with segmentation at the block level improves the quality of the translation. For example, you may need three sentences in Spanish to translate two English sentences. The resulting Spanish translation will read better if the paragraph is translated as a block instead of isolated sentences.
If the best practices discussed above are followed, the first translation of the DITA content can already include new content; there is no need to translate the migrated content on its own before adding new material to the documents.