Subject: RE: [user-assistance-discuss] Thoughts on Graphical Callouts


This is my last message on this topic because it is clear to me that
developing a help authoring standard that could not accommodate images
would be self-defeating and useless. Nobody would implement it. Nobody
would use it. 
 
You are conflating two different approaches. 
 
1. One proposal is to disallow markup targeted towards visually oriented
people. This would doom a standard to irrelevance.
 
Difficulty comprehending textual descriptions is a legitimate problem
that some people have, and in fact we all have it to one extent or
another. Some things can be more quickly comprehended
visually. Some people will need the textual directions that some mapping
sites can generate. Other people will prefer the maps. I, personally, do
not think that Web standards that give mapping sites the choice are
misguided. Some people want to see a table of historical stock prices.
Other people want a graph. Providing the graph does not prevent you
from providing the table.
 
Other things are specifically designed for visually-oriented people. It
makes no sense to describe a toolbar button as "the little button with
the scissors." It is a job responsibility of thousands of technical
writers to document the behaviour of toolbar buttons and hardware
controls.
 
I have, in fact, discussed these issues with visually impaired people in
the past. I have never once heard the point of view you espouse. I
suggest you look at the work of T.V. Raman (a long-time W3C contributor).
 
2. Another proposal is to allow or even require markup targeted towards
visually impaired people. Great! This direction could be quite fruitful.
This is the approach taken by T.V. Raman (and most well-known
accessibility advocates). Multimodal input:
 
http://almaden.ibm.com/u/tvraman/chi-2003/mmi-position.html
 
"Symmetric use of modalities where appropriate can significantly enhance
the usability of applications along this dimension; for example, an
interface that can accept input via speech or pen might visually
highlight an input area while speaking an appropriately designed
prompt."
 
"Multimodal interfaces need to adapt to the user's environment to ensure
that the most optimal means of completing a given task are made
available to the user at any given time. In this context, optimality is
determined by: 
The user's needs and abilities	
The abilities of the connecting device	
Available band-width between device and network	
Available band-width between device and user	
Constraints placed by the user's environment, e.g., need for hands-free,
eyes-free operation.	
"
 
 
________________________________

From: marbux [mailto:marbux@gmail.com] 
Sent: Tuesday, May 09, 2006 8:30 PM
To: Irwin, Barbara
Cc: Paul Prescod; user-assistance-discuss@lists.oasis-open.org
Subject: Re: [user-assistance-discuss] Thoughts on Graphical Callouts



	On 5/8/06, Irwin, Barbara <Barbara.Irwin@eclipsys.com> wrote:

		That's an interesting point, but users can be handicapped in many ways - including some who cannot read.

	
	That is a true statement. Blind people are an example, except for those who have learned to use electronic Braille displays. See <http://en.wikipedia.org/wiki/Refreshable_Braille_display>. JAWS (Job Access With Speech) is presently the most widely used screenreader program for Braille readers, but is limited to DOS and Windows. <http://en.wikipedia.org/wiki/JAWS_%28screen_reader%29>. While JAWS can convert screen text to speech via a voice synthesizer, its functionality is largely one-way. It adds little to an end user's ability to control applications.
	
	Others who cannot read include sighted people who have never acquired the skill. If that was the group you had in mind, graphics are not a good solution to their problem. E.g., how are they even going to find the Help menu at the top of the application's screen if they can't read? How are they going to navigate within the Help system? And even if you get past that barrier, how many images would have to be created to translate all of the text in a Help system into understandable graphics?

	
	The way forward for Help systems is a combination of web
browsers, text, markup, VoiceXML, and its related standards.  See
<http://en.wikipedia.org/wiki/VoiceXML>:
	
	>>>
	

	"VoiceXML (VXML) is the W3C
<http://en.wikipedia.org/wiki/World_Wide_Web_Consortium> 's standard XML
<http://en.wikipedia.org/wiki/XML>  format for specifying interactive
voice dialogues between a human and a computer. It is fully analogous to
HTML <http://en.wikipedia.org/wiki/HTML> , and brings the same
advantages of web application development and deployment to voice
applications that HTML brings to visual applications. Just as HTML
documents are interpreted by a visual web browser, VoiceXML documents
are interpreted by a voice browser
<http://en.wikipedia.org/wiki/Voice_browser> . A common architecture is
to deploy banks of voice browsers attached to the public switched
telephone network ( PSTN
<http://en.wikipedia.org/wiki/Public_Switched_Telephone_Network> ) so
that users can simply pick up a phone to interact with voice
applications.

	"There are already thousands of commercial VoiceXML applications
deployed, processing many millions of calls per day. These applications
perform a huge variety of services, including order inquiry, package
tracking, driving directions, emergency notification, wake-up, flight
tracking, voice access to email, customer relationship management,
prescription refilling, audio newsmagazines, voice dialing, and
real-estate information. They serve all industries, and range in size
all the way up to massive national directory assistance
<http://en.wikipedia.org/wiki/Directory_assistance>  applications.

	"VoiceXML has tags that instruct the voice browser
<http://en.wikipedia.org/wiki/Voice_browser>  to provide speech
synthesis <http://en.wikipedia.org/wiki/Speech_synthesis> , automatic
speech recognition <http://en.wikipedia.org/wiki/Speech_recognition> ,
dialog management, and soundfile playback.
	
	. . . . .
	
	"Two closely related W3C standards used with VoiceXML are the
Speech Synthesis Markup Language (SSML
<http://en.wikipedia.org/wiki/SSML> ) and the Speech Recognition Grammar
Specification ( SRGS <http://en.wikipedia.org/wiki/SRGS> ). SSML is used
to decorate textual prompts with information on how best to render them
in synthetic speech, for example which speech synthesizer voice to use,
and when to speak louder. SRGS is used to tell the speech recognizer
what sentence patterns it should expect to hear."
	
	<<<
	 
	See also World of VoiceXML. <http://www.kenrehor.com/voicexml/>.

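	To make this concrete, a minimal VoiceXML 2.0 document for a spoken Help menu might look like the sketch below. It is illustrative only: the form id and the grammar file "topics.grxml" are hypothetical, not taken from any existing Help system.

	<?xml version="1.0" encoding="UTF-8"?>
	<vxml version="2.0" xmlns="http://www.w3.org/2001/vxml">
	  <form id="help-topic">
	    <field name="topic">
	      <!-- Rendered aloud by the voice browser's speech synthesizer. -->
	      <prompt>Say the name of the help topic you want to hear.</prompt>
	      <!-- An SRGS grammar tells the recognizer what to listen for;
	           "topics.grxml" is a hypothetical grammar file. -->
	      <grammar src="topics.grxml" type="application/srgs+xml"/>
	      <filled>
	        <prompt>Loading <value expr="topic"/>.</prompt>
	      </filled>
	    </field>
	  </form>
	</vxml>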
	
	Now let's take a quick look at a VoiceXML implementation, HAWHAW, a rapidly rising star in the open source software scene. <http://www.hawhaw.de/>. HAWHAW is licensed under the GNU Lesser General Public License. (The same license is used for OpenOffice.org, and allows Sun Microsystems and IBM, among others, to incorporate OOo code in, respectively, the StarOffice and IBM Workplace proprietary applications.)
	
	>>>
	
	"With HAWHAW you can publish WAP pages which are also accessible
by HTML standard browsers. HAWHAW automatically determines the
requesting device's capabilities and creates appropriate markup code.
Many users confirmed that HAWHAW created WML output is compatible with a
lot of mobile devices. 

	"You don't have to care about all the different browser types
available today. The markup language code is generated by HAWHAW in
order to achieve a maximum of portability across so many different
handheld devices as possible. Especially WAP devices often need
browser-optimized WML code for usability reasons.

	"In the early days HAWHAW was a small PHP class library,
restricted to create WAP pages. But in the meantime HAWHAW was enhanced
step by step and now it is not restricted any more to generate HTML and
WML code only! If requested by the user agent, HAWHAW sites are capable
to serve XHTML, HDML, i-mode™, MML, PDA's and voice browsers as well!
XHTML (WAP 2.0) is the successor of WML 1.x and bridges the gap between
WAP and web applications. HDML is some sort of old-fashioned WML
predecessor and is still used, e.g. in North America. i-mode is
currently widely used in Japan, but is expected to become a serious
alternative to WML. MML is short for Multimedia Markup Language and is
like i-mode used in Japan. PDA-browsers are served by HAWHAW with
"handheld-friendly" HTML.

	"Since Version 5.0 HAWHAW additionally supports VoiceXML. That
means all HAWHAW applications are voice-enabled per default. The HAWHAW
API offers many voice specific options, making HAWHAW a full-featured
development tool to create interactive voice applications.

	"And from version 5.6 onwards, HAWHAW supports special output
for the Lynx text browser. This "archaic" browser is still used today,
often by handicapped people with screen readers and other special
equipment. HAWHAW's Lynx support allows to create barrierfree
applications, which validate Bobby-AAA-approved
<http://bobby.watchfire.com/>  out of the box and additionally are
accessible from each telephone by means of HAWHAW's VoiceXML support."

	<<<
	
	The existing stack of electronic assistive technology for accessibility rests on a foundation of hardware and text.
	


		Software systems are designed for many functions, and the purpose of help systems is to help users navigate those systems. Graphics (including those that lack artistic merit) are one of many tools available for that purpose, and can be very efficient where the written word is not.


	In such situations, how do you meet the informational requirements of the blind? Please enlighten me if you have a solution. I am not closed-minded. I allow myself to be persuaded by others every day.
	


		Individual help authors are in the best position to
determine what their
		users need, and can understand. 


	Your first clause is correct but the history of the Help
authoring industry teaches your second clause is erroneous. Perhaps you
might provide me with an example of your own work for review in regard
accessibility issues?
	 


		They're the ones who receive feedback directly from their users, and conduct usability studies. I think our purpose here is to provide the widest tool-set possible, that will allow authors to create help in a manner that meets the needs of their users.


	Please carefully consider my words. Handicapped accessibility is
not optional. I have proposed a method of programmatically addressing
that issue through the process of this standardization proceeding. No
naysayer has proposed any programmatic method of addressing the issue
within the context of this process.  All proposals other than mine
undermine the W3C VoiceXML recommendation and associated standards and
ignore the legal obligations of your employers and customers.  


		From my perspective, what you're describing is an educational issue, not a "tools" issue.


	You apparently confuse whose education is under discussion. It is a tools issue. As only a partial solution, many accessibility markup requirements could be enforced by validating input against a schema during an XML transformation. E.g., the Bitflux and Kupu web app editors validate against a RelaxNG schema, BFE in near real-time and Kupu upon page save. Neither will allow a page to be saved unless the XHTML is valid.
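	As a sketch of the idea (not the schemas those editors actually ship), a RelaxNG fragment can make the alt attribute mandatory and non-empty for every image element:

	<element name="img" xmlns="http://relaxng.org/ns/structure/1.0"
	         datatypeLibrary="http://www.w3.org/2001/XMLSchema-datatypes">
	  <attribute name="src"><text/></attribute>
	  <!-- alt is required; the minLength facet rejects empty values. -->
	  <attribute name="alt">
	    <data type="string"><param name="minLength">1</param></data>
	  </attribute>
	</element>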
	
	An implementation of this example that is demonstrably feasible would be for this proposed standard to require that implementations validate any HTML or XHTML input files to confirm that, for each instance of a graphic image in a file, an alt attribute is present, validly formatted, and occupied by text. If the validation fails, the process aborts. There are a number of validators around the Web, including those provided by the W3C. Similar markup validation could be required for all other formats converted to Help files via this standard's XML.
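	One hypothetical way to express that check is a Schematron pattern; a conforming converter could abort whenever an assertion fails. This is a sketch, not part of any existing validator:

	<schema xmlns="http://www.ascc.net/xml/schematron">
	  <pattern name="Images must carry alt text">
	    <rule context="img">
	      <!-- Both assertions must hold, or validation reports a failure. -->
	      <assert test="@alt">An img element is missing its alt attribute.</assert>
	      <assert test="normalize-space(@alt) != ''">An alt attribute is empty.</assert>
	    </rule>
	  </pattern>
	</schema>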
	
	XML standards normally require validation and compliance. That
is one of the strengths of XML.
	
	Similar validation requirements could block Help authors from
converting files without valid VoiceXML file navigation markup. 
	
	We are not talking rocket science here, as some say. All you have to do is cross every toy off your list for which you can't come up with an accessibility solution. Mankind managed somehow to survive and flourish before bit-mapped computer screens came along.
	
	Perhaps someone might explain to me the logic behind making it
easy for Help authors to write non-accessible Help files and why that is
better than making it difficult. In every office system I designed, I
tried to make doing things the right way the easiest way. I truly do not
understand where you people are coming from.
	
	--Marbux
	



