It may seem counterintuitive, but audio is a primary medium for providing
accessibility to many persons with disabilities. Whether it is an audio
recording of someone reading text or a real-time, computer-generated Text To
Speech (TTS) rendition, persons who are blind, or who live with severely
impaired vision or a learning disability, often use audio as their primary
reading modality. While we do not expect ODF applications to become audio
recording and editing applications, there are nevertheless critical
considerations that should be observed so that documents produced by ODF
applications can easily be used to create the smart audio renditions
increasingly used by people who rely on audio to read textual content.
5.1 Where and How Audio Is Used for Accessibility
Audio renditions of textual content are so common and powerful that they have
been codified in an ANSI/NISO
standard, Z39.86. This same specification has been adopted internationally
by a consortium of libraries for the blind and print handicapped called the DAISY Consortium. In turn, this ANSI
specification served as the basis for the U.S. legal mandate, known as NIMAS,
to provide accessible textbooks and curricular material in U.S. schools.
Materials produced in audio include everything from novels for leisure
reading to newspapers, magazines, and technical reference material (in
addition to curricular material). There are also national programs for
creating and distributing such content across Europe, Canada, Australia, and
Japan, as well as many other countries.
5.2 How ODF Fits In
ODF authoring applications are relevant to alternative media production,
including audio, because alternative media such as audio renditions are
simply a different media edition of the same content that ODF applications
produce for visual consumption:
Item: Soft Page Breaks and Hard Page Numbering
[Ref to spec revision]
Even the simplest content cannot be discussed accessibly in a group
environment if there is no simple mechanism to point group members to a
particular location in the document. Pagination is the most common solution
to this problem; however, pagination is inextricably tied to a particular
rendering of the content. Alternative media such as audio (and large print and braille) must therefore have
mechanisms that allow their users to know where they are relative to the hard
page numbers of the primary source document.
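
By way of illustration, the following minimal sketch shows how a downstream
audio production tool might recover print-page boundaries from a saved
document. It assumes the authoring application has recorded its layout page
breaks with ODF's <text:soft-page-break/> element in content.xml; the file
name report.odt and the function name paragraph_pages are illustrative only.

import zipfile
import xml.etree.ElementTree as ET

TEXT_NS = "urn:oasis:names:tc:opendocument:xmlns:text:1.0"

def paragraph_pages(odt_path):
    """Return (page_number, opening_words) for each paragraph, in document order."""
    with zipfile.ZipFile(odt_path) as odt:        # an .odt file is a ZIP package
        root = ET.fromstring(odt.read("content.xml"))
    page = 1
    pages = []
    for elem in root.iter():                      # walk elements in document order
        if elem.tag == f"{{{TEXT_NS}}}soft-page-break":
            page += 1                             # the layout started a new print page here
        elif elem.tag == f"{{{TEXT_NS}}}p":
            words = "".join(elem.itertext()).strip()
            if words:
                pages.append((page, words[:40]))
    return pages

if __name__ == "__main__":
    for page, words in paragraph_pages("report.odt"):
        print(f"page {page:>3}  {words}")

A tool built this way can announce "page 12" in the audio edition at the same
point where page 12 begins in print, which is exactly what a mixed group of
readers needs in order to stay synchronized.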
Item: Structural Markup
Effective use of audio renditions requires that users have the ability to move
quickly back and forth through the audio rendition based on the structure of
the document. Traditional audio playback equipment provided fast-forward and
rewind mechanisms, but these are highly inefficient because time offsets are
irrelevant to the content itself. What is relevant is the structure of the
document: Does it have chapters? Subsections? Footnotes? Sidebars?
Paragraphs? Effective support of alternative rendering, and especially audio
rendering, requires that the source document be correctly tagged with
structural markup. Indeed, the aforementioned ANSI and NIMAS specifications
provide XML-based markup to allow rendering agents to support quick movement
forward and backward through content based on chapters, subsections,
footnotes, paragraphs, and other structural elements. The most usable devices
allow users to adjust "levels" of navigation, so that hierarchical structures
such as X.Y.Z might be navigated at the X level, the Y level, or the Z level,
at the user's option.
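
To make the point concrete, the sketch below extracts a document's heading
hierarchy from content.xml and lets a reading tool choose its navigation
level. It assumes headings are tagged as <text:h> elements carrying a
text:outline-level attribute, rather than as ordinary paragraphs styled to
look like headings; report.odt and the function name outline are again
illustrative.

import zipfile
import xml.etree.ElementTree as ET

TEXT_NS = "urn:oasis:names:tc:opendocument:xmlns:text:1.0"

def outline(odt_path, max_level=3):
    """Yield (level, heading_text) for each heading down to max_level."""
    with zipfile.ZipFile(odt_path) as odt:
        root = ET.fromstring(odt.read("content.xml"))
    for h in root.iter(f"{{{TEXT_NS}}}h"):        # real headings, not styled paragraphs
        level = int(h.get(f"{{{TEXT_NS}}}outline-level", "1"))
        if level <= max_level:
            yield level, "".join(h.itertext()).strip()

if __name__ == "__main__":
    # Navigate at the chapter level only (max_level=1), or drill down further.
    for level, title in outline("report.odt", max_level=2):
        print("  " * (level - 1) + title)

A player built on such an outline can offer the "levels" behavior described
above, skipping by chapter when the level is 1 or by subsection when it is 2
or 3, without ever relying on time offsets. None of this is possible if the
structure exists only as visual formatting.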