Subject: Re: [dita-learningspec] Groups - DITA Learning Content SC Meeting modified



Hi Wayne

I hope your secondment to ADL was a good one.  I am sure it will have 
brought great benefits to the overall S1000D/SCORM agenda amongst other 
things.

Thanks for the update on the S1000D status.  After all the push for 
S1000D, I must admit I was under the impression that the US DoD had 
issued the Instruction, whereas the ADF had only recommended it.  Silly 
me.  It only took about 4 or 5 years to get the instruction on SCORM, so 
why should S1000D have jumped the queue between talk and action, eh?

So, to the discussion below.  Clearly, I will not challenge the 
information you have presented, so that is not at issue.  The very rough 
scenario that I rather sloppily threw together was not meant to be a 
down-the-line factual case, only illustrative.  Nonetheless, it has done 
its job: you have provided new information that serves to advance the 
debate.  Let's say that S1000D was not even on the table.  (It would be 
easier if it were the only spec to deal with.)  The point of this 
discussion, as I intended it anyway, is to flush out a lot of the 
high-level stuff that we need to have in our plans before we get too 
tied up in the weeds.  We will have to talk about metadata mapping, what 
needs to be included in the DITA XML files, what fields we need, and so 
on, and that will be a detailed task to work through.  Scott has given 
us a great start with his spreadsheet, and it is a great piece of work.  
My concern is that we have jumped to that level of detail a little too 
quickly.  I think we all need to have a clear understanding of the 
lifecycle, the impacts, the reuse models, and such things before we can 
be sure we have accounted for the work in a way that is adequately 
future-proofed.  Today we are talking DITA, SCORM, possibly S1000D, and 
then a bunch of other specs that are kicking around, not just in the 
defense, aviation and tech writing spaces, but in all sorts of others 
as well.

The high-level question would have to be the one that DITA faces all the 
time: when should you have a specialization, and why?  Followed by some 
guidance on what it should contain to be complete and, possibly, what 
things it should not do.

My point about the metadata was from the following perspective, and 
please, anyone, challenge my assumptions if they are incorrect or misstated.

The assumption in many contexts, as it is in this one, is the ability to 
single-source content: create it once and then reuse it in different 
contexts.  The problem comes from two directions.  Firstly, there is the 
whole question of who should be allowed to make any changes to authorized 
content.  If I am the semiconductor guy, then I create the content 
through my own processes, authorize it and then make it available.  Who 
should be allowed to fiddle with it?  The current indication from 
various sources is that no one else should be allowed to.  If that's the 
case, and the metadata must be stored inside the DITA content files, 
then it has to be part of the original creation process, which in turn 
means that all that metadata must have been put into the file at the 
time of authoring.  That's a tough call.  The lifespan of some pieces of 
electronics in aviation and defense could be quite short, but it could 
also be 30 years or more.  We don't know what uses the content may be 
put to over its lifespan, in terms of inclusions in or exclusions from 
variants (of vehicles, that is).  The fundamental question is how we 
should deal with that situation in terms of metadata, content revision 
and so on.
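
To make that concrete, here is a rough sketch of what "all metadata 
embedded at authoring time" could look like, using standard DITA prolog 
elements (the othermeta names are invented purely for illustration):

   <topic id="component-x-maintenance">
     <title>Component X Maintenance</title>
     <prolog>
       <author>Semiconductor vendor</author>
       <critdates><created date="2007-07-22"/></critdates>
       <metadata>
         <keywords><keyword>flight navigation</keyword></keywords>
         <!-- downstream consumers' metadata, anticipated up front -->
         <othermeta name="s1000d-applicability" content="variant-a"/>
         <othermeta name="lom-aggregation-level" content="2"/>
       </metadata>
     </prolog>
     <body>...</body>
   </topic>

Every such entry for a consumer we cannot yet name would have to be 
anticipated at creation time, over a lifespan we cannot predict.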

One could think up many issues where content in a single-sourcing model 
will cut across specializations.  The more specialized the work becomes, 
the more it gnaws away at the value of single sourcing, and if you are 
dealing with content ownership and authorization issues it can reach a 
point where the ability to reuse the content as broadly as we may have 
intended is undermined to a large extent.  The same issues apply to a 
piece of healthcare apparatus; in fact, from my experience in some of 
the healthcare scenarios, the FDA has tighter control than the DoD ever 
will.

I guess this is a call for some clarity at the architectural level 
regarding the way this stuff is supposed to work, and how it can provide 
the long-term, through-life, maintainable solution we would all like, 
irrespective of the industry sector - if that's possible.  If not, let's 
at least know where the boundaries are and when we may be tempted to 
cross them.

Sorry for the long message, and congratulations if anyone got this far!

Allyn

Wayne Gafford wrote:
> greetings
>  
> apologies for missing last thursday. i returned home for the first time
> in six months to visit and brief my chain of command on my work at adl.
> my thursday was packed with required meetings. i will look forward to
> this thursday.
>  
> there is much to reflect upon below. i'll touch on the u.s. dod s1000d
> mandates. the navy's navair is the only dod entity that requires s1000d
> by policy for tech data. the other dod entities do not mandate s1000d.
> navsea has a digital data policy that says all tech data will be
> procured in xml, thereby making s1000d allowable. the s1000d policy
> decision process in the office of the secretary of defense (osd), taken
> up within the acquisition, technology and logistics (atl) dept, has
> virtually broken down. retirements, turnovers, and an unfortunate
> reversal towards product data management (pdm) within atl have left a
> vacuum. however, the broad applicability of s1000d remains for
> technical data, technical training data, planned maintenance data,
> system testing and evaluation data, parts data...all data that must
> support a common system.
>  
> in the semiconductor instance set in a flight navigation equipment
> context, navair will require s1000d. if it's an air force project,
> their own specs may be used. regardless of what type of xml is used, i
> look at the scenario from a life cycle perspective. what is in the best
> interest of long-term product life cycle management? life cycle system
> data management is best executed when all content is managed in a
> common digital data format. this suggests the right spec for the right
> data. if the tech and life cycle data documents systems with
> assemblies, sub-assemblies, parts and applicability, then s1000d. if
> the tech data does not require any kind of system configuration through
> parts and applicability tracking, then dita or another type of industry
> spec is better.
>  
> for the semiconductor scenario, if in a navair context, use metadata
> that is germane to the chosen spec, then implement in business
> processes (i can elaborate on the next phonecon). in s1000d, the
> metadata is in the <idstatus> wrapper. this metadata names and
> identifies files for configuration and applicability. all support
> data, including technical training, goes into s1000d data modules.
> however, in the training content case, the learning content models
> developed over the last 10 months would be used within the s1000d
> environment, wrapped by the <content> element. the <idstatus> metadata
> will describe each individual file stored in a common source database.
> when the training-specific data modules are ready for scorm
> conformance, the lom comes into play...as the lom will describe the
> entire content package.
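>  
> as a skeletal view of that split (a rough sketch only, using issue 2.x
> element names, with the details omitted):
>  
>    <dmodule>
>      <idstatus>
>        <!-- names/identifies the file: configuration, applicability -->
>      </idstatus>
>      <content>
>        <!-- the learning content models would sit here -->
>      </content>
>    </dmodule>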
>  
> one of the 5 adl co-sponsored s1000d change proposals to support
> training offers to import the lom directly into the publication module
> (pm). the pm, acting as the manifest-like aggregating function, is well
> suited to support the lom construct during content production (there
> are some variations being offered that i can discuss in the next
> phonecon). the lom, pm and learning content are created during
> production.
>  
> if the content is not based on through-life configuration (i.e., no
> tie to parts, schematics, applicability, configuration and specific
> naming conventions), then use a different spec. the use of one spec for
> new content acquisition helps to simplify an already complicated
> process. use the learning content models in a spec chosen to meet data
> processing requirements. this is where adl sees value at a level of
> interoperability that demonstrates how a consistent set of learning
> content models can work in any specification.
>  
> wayne
>  
>  
>
> ________________________________
>
> From: Allyn Radford [mailto:allynr@learnilities.com.au]
> Sent: Sun 7/22/2007 6:47 PM
> To: scott.hudson@flatironssolutions.com
> Cc: dita-learningspec@lists.oasis-open.org; Kevin Ruess
> Subject: Re: [dita-learningspec] Groups - DITA Learning Content SC Meeting modified
>
>
>
>
> Hi all
>
> Just one correction in the minutes.  In the notes below there is a
> change required to the following statement:
>
>    Allyn brought up that we could consider an organization that groups
>    elements/attributes that describe the behavior of content (like
>    conditional display) in a separate, referenced metadata file but
>    embeds elements/attributes that more directly describe the content
>    itself, inside the topic or map files.
>
> I actually made the comment the other way around: the purely
> descriptive content could be in an associated file, whereas the
> "functional" metadata could be inside the content file.  These are
> suggestions only, and I must confess that I have not yet settled my
> own thinking on this.  It is 'a' possibility.
>
> As an extension to this (not covered in the meeting, so not applicable
> to the minutes), there is the problem that sits behind this issue.  I
> think I first had to consider it about six or so years ago when
> building an infrastructure that required content to be interoperable
> and discoverable.  When we considered the QTI-type content and were
> tackling the issues of granularity of that content and how it should
> be stored, we were somewhat sobered by the realization that we could
> end up creating a whole bunch of metadata files that were substantially
> larger than the individual QTI objects they described.  Is that really
> worthwhile?
>
> There is another piece of thinking we need to join to this as well.
> There will never, ever be a single metadata schema that suits all
> purposes.  The metadata will be deemed correct for the needs of a
> community of practice by that community of practice, and it is not
> going to be a subset of some single metadata schema that suits all the
> metadata purposes of the world.
>
> Given those two issues - a) metadata strategy and the size of metadata
> files compared to content, and b) no single metadata schema suiting
> all needs - what goes into DITA and what does not?  Our current
> considerations are limited because we are only considering a learning
> specialization for SCORM, which relies on the LOM.  What about other
> requirements?  Let me pose the following illustrative problem:
>
> There is a new specialization starting up for the semiconductor
> industry.  Let's say I am in that industry and produce a component that
> will be used in a piece of flight navigation equipment that will be
> built into an aircraft that is supplied to the US DoD.  That
> procurement process requires that the technical documentation and the
> training be supplied to each of the upstream vendors.  If I am creating
> the content in DITA format, there will be a requirement by both the
> aircraft industry and the DoD to have the content in S1000D format.
> Ok, that could be a transform.  Then it would need to be *dynamically*
> transformed into training content.  Another transform, because it would
> have to be SCORM-based.  Both S1000D and SCORM are mandated by defense
> instructions.  So, what metadata schema is used?  When is all the
> appropriate metadata added, by whom, and where?  We are now talking
> about metadata for the semiconductor industry, plus S1000D metadata,
> plus LOM/SCORM metadata, plus whatever is required by the individual
> companies involved along the way for internal data management
> purposes.  Is accumulating all the metadata in the dita file
> sustainable?  Does that simplify or complicate maintenance of both
> content and metadata?  Will the metadata elements, whether empty or
> filled, be bigger than the content in the file itself?
>
> Now, I don't have answers to these questions, but I think we really
> need to think about the requirements for the solution to be robust and
> to serve the needs of any industry group that might become involved in
> the authoring, assembly and management (through life) of structured
> content.  We need to keep at least one eye on the issues that arise
> from content strategies during implementation.
>
> Hope this makes sense and that it is useful.
> Allyn
>
>
> scott.hudson@flatironssolutions.com wrote:
>   
>> Thanks to John Accardi for taking minutes!
>>
>>  -- Scott Hudson
>>
>>
>> DITA Learning Content SC Meeting has been modified by Scott Hudson
>>
>> Date:  Thursday, 19 July 2007
>> Time:  04:00pm - 05:00pm ET
>>
>> Event Description:
>> USA Toll Free Number: 866-880-0098
>> USA Toll Number: +1-210-795-1100
>> Australia Toll Free Number: 1-800-993-862
>> Australia Toll Number: 61-2-8205-8112
>> PASSCODE: 6396088
>>
>> For information on specific country access dialing, see http://www.mymeetings.com/audioconferencing/globalAccessFAQ.php.
>>
>> Agenda:
>>
>>
>> Minutes:
>> DITA Learning Content SC minutes 19 Jul 2007
>>
>> Attendees:
>> John Accardi
>> Allyn Radford
>> Robin Sloan
>> Scott Hudson
>>
>>
>> Primary agenda was to advance the IEEE LOM work initiated by Scott.
>> The task was to fill in column H with elements/attributes in our SC
>> structures that would apply.  Much discussion began, but not many
>> values were filled in ... it seems like a slippery task.
>>
>> Key discussion points:
>>
>> - IEEE LOM elements are mostly optional so that implementing
>> organizations can select the subsets important to them for required
>> treatment.  Allyn mentioned that it is rare that any implementing
>> organization uses all the elements/attributes.  They pick the subset
>> that works for them and make them required as necessary to support
>> processing and deployment.  In other words, standards bodies make
>> things optional for flexibility, and implementing organizations make
>> things mandatory to support their specific business requirements.
>>
>> - Scott mentioned that the LOM has nailed down the meaning, vocabulary
>> and intentions of some things and is very vague about other things.
>> For example, aggregation level has subjective values of 1, 2, 3 and 4.
>>
>> - Allyn brought up that we should check whether Scott's spreadsheet is
>> based on the latest LOM standard (it might be based on an earlier
>> version).  Vendor solutions are typically going to IEEE LOM 1.0.
>>
>> - Scott needed the group to consider whether the LOM elements need
>> representation across all our info types, only at map level, or both.
>> Allyn brought up that we could consider an organization that groups
>> elements/attributes that describe the behavior of content (like
>> conditional display) in a separate, referenced metadata file but
>> embeds elements/attributes that more directly describe the content
>> itself, inside the topic or map files.  Allyn brought up SCORM SCOs as
>> an example that implements associated metadata files.  Allyn also
>> mentioned that in older HTML work, easier maintenance was supported
>> with separated metadata files.  Scott mentioned that the DITA way
>> seemed to be object-oriented: keep the metadata inside the topic and
>> map files, perhaps in the prolog structure.  This way everything is
>> organized and travels together.
>>
>> - Scott reminded all that the idea was to be sure that our structures
>> contained all the elements/attributes needed to map to the LOM, all in
>> support of a LOM manifest.  Allyn suggested that such data would
>> typically come primarily from a map.  John suggested that it might be
>> helpful to look at a working manifest and work backwards to the source
>> points in the LOM and then to our structures.  Allyn suggested we use
>> an example like this from ADL.
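>>
>> For reference, the outer shell of a SCORM 2004 manifest that such a
>> mapping would feed looks roughly like this (illustrative skeleton
>> only, with details elided):
>>
>>    <manifest identifier="...">
>>      <metadata>
>>        <schema>ADL SCORM</schema>
>>        <schemaversion>2004 3rd Edition</schemaversion>
>>        <lom>...IEEE LOM record, mapped from map/topic metadata...</lom>
>>      </metadata>
>>      <organizations>...</organizations>
>>      <resources>...</resources>
>>    </manifest>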
>>
>> - Suggestion to put structural stuff inside the topics themselves,
>> while descriptive stuff could be handled via an attribute like metaref
>> that points to an external metadata file.  This would be more akin to
>> how IMS can point to an external metadata file.
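>>
>> A hypothetical sketch of that split (metaref is a proposed attribute,
>> not part of existing DITA):
>>
>>    <topic id="t1" metaref="t1-meta.xml">
>>      <!-- descriptive metadata lives in t1-meta.xml -->
>>      <prolog>
>>        <metadata>...behavioral/structural metadata stays inline...</metadata>
>>      </prolog>
>>      <body>...</body>
>>    </topic>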
>>
>> - John asked for a big-picture clarification of why mapping our
>> elements/attributes to the LOM was desirable.  Scott and Allyn
>> responded that if SCORM and IMS are based on some level of
>> implementation of the IEEE LOM, then if our DITA SC also had all the
>> LOM mapped, deliverables would have a better chance of playing in
>> those SCORM and IMS worlds.
>>
>> - John mentioned that it was difficult to easily see all our
>> elements/attributes in support of quick, efficient and accurate
>> mappings to the LOM.
>>
>> - Allyn brought up potential confusion about what would be metadata
>> versus real content.  For example, metadata for one content type might
>> easily be seen as content proper in another (e.g., Learning Content
>> vs. Instructional Design).
>>
>> - Allyn also mentioned that the LOM might be inherently insufficient
>> for learning content purposes - for example, the lack of something
>> like a learning objective.  Scott agreed, so the LOM should be
>> considered just a minimum.
>>
>> - Scott and Allyn spoke about the fact that a high-level perspective
>> or diagram is needed that relates SCORM, DITA and IMS.  The lack of
>> this hinders our LOM mapping exercise.
>>
>>
>> This event is one in a list of recurring events.
>> Other event dates in this series:
>>
>> Thursday, 14 June 2007, 11:00am to 12:00pm ET
>> Thursday, 21 June 2007, 11:00am to 12:00pm ET
>> Thursday, 28 June 2007, 11:00am to 12:00pm ET
>> Thursday, 05 July 2007, 11:00am to 12:00pm ET
>> Thursday, 12 July 2007, 04:00pm to 05:00pm ET
>> Thursday, 26 July 2007, 04:00pm to 05:00pm ET
>> Thursday, 02 August 2007, 04:00pm to 05:00pm ET
>> Thursday, 09 August 2007, 04:00pm to 05:00pm ET
>> Thursday, 16 August 2007, 04:00pm to 05:00pm ET
>> Thursday, 23 August 2007, 04:00pm to 05:00pm ET
>> Thursday, 30 August 2007, 04:00pm to 05:00pm ET
>> Thursday, 06 September 2007, 04:00pm to 05:00pm ET
>> Thursday, 13 September 2007, 04:00pm to 05:00pm ET
>> Thursday, 20 September 2007, 04:00pm to 05:00pm ET
>> Thursday, 27 September 2007, 04:00pm to 05:00pm ET
>>
>> View event details:
>> http://www.oasis-open.org/apps/org/workgroup/dita-learningspec/event.php?event_id=15062
>>
>> PLEASE NOTE:  If the above link does not work for you, your email
>> application may be breaking the link into two pieces.  You may be able to
>> copy and paste the entire link address into the address field of your web
>> browser.
>>
>>  
>
> --
> Allyn J Radford
> Principal
> Learn'ilities' Pty Ltd
> www.learnilities.com
>
> Solution Architecture Consulting
> Standards-based eLearning Systems and Content
> Digital Content Exchange Planning and Development
>
> Phone: +61 (0)3 9751 0730
> Mob:   +61 (0)419 009 320

-- 
Allyn J Radford
Principal
Learn'ilities' Pty Ltd
www.learnilities.com

Solution Architecture Consulting
Standards-based eLearning Systems and Content
Digital Content Exchange Planning and Development

Phone: +61 (0)3 9751 0730
Mob:   +61 (0)419 009 320
