search-ws message



Subject: Re: Draft of "Annex D - Extensions for Alternative Response Formats" for Review


Hi Ray:

Thanks for the replies.

I guess the main thing here is that response types need to be listed (with
server defaults) in the Explain record. This is similar to the OpenSearch
description document, which explicitly lists the URI templates per mime type.

So Explain/Description get hitched a little closer.

What I do want to avoid (and we do, if server defaulting is allowed) is
unnecessary parameters in the request. Ideally one (or possibly two) would
normally suffice: the query (and possibly a mime type), and then pagination
as required.
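
To make that concrete, a minimal request might look something like the
sketches below (everything other than 'query', 'startRecord' and
'maximumRecords' is a placeholder for whatever we settle on, not taken from
the draft):

    http://example.org/sru?query=dinosaur
    http://example.org/sru?query=dinosaur&httpAccept=application/atom%2Bxml
    http://example.org/sru?query=dinosaur&httpAccept=application/atom%2Bxml&startRecord=11&maximumRecords=10

Anything not given - default response type per mime type, default mime type -
would come from the server's Explain record.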

Re 'unpacked' you say:

>    I don't know if this answers your
> question but I think we'd need additional use cases.

The nature.com OpenSearch instance provides an extremely clear and compelling
use case. Representing record data properties in natural schema form for PAM
currently requires *4* levels of nesting (or namespacing). That's way too
much. (I just don't buy into the "only for machines" viewpoint.) And that's
why we introduced the notion of 'unpacked': to break the data properties out
free of the SRU marshalling schema - i.e. to present them as one would if no
SRU elements were present. This adds *0* levels of nesting.

(This presents a distinct variant SRU response - with the data pre-assembled.
And the reason for keeping the other SRU elements is general expressiveness -
e.g. diagnostics, query, facets, extra data, etc.)
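
To illustrate the shape I have in mind (a sketch only - element names and
placement are indicative, not from the draft). Packed, the data properties
sit under the marshalling elements:

    <srw:searchRetrieveResponse>
      <srw:records>
        <srw:record>
          <srw:recordData>
            <dc:title>...</dc:title>
          </srw:recordData>
        </srw:record>
      </srw:records>
    </srw:searchRetrieveResponse>

Unpacked, the data properties are presented as they would be with no SRU
wrapping, with any retained SRU elements as siblings:

    <srw:searchRetrieveResponse>
      <srw:numberOfRecords>1</srw:numberOfRecords>
      <dc:title>...</dc:title>
    </srw:searchRetrieveResponse>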

Re 'json' you say:

> I'll have to leave all the JSON complexity to the rest of you as I have no
> insight at all into this issue.  And it sounds like it's only JSON that
> presents a problem here, right?

Yes, it's true that JSON is problematic. Also true that JSON might possibly
be the most useful return format. E.g. it potentially allows us to add
search *inline* on web pages, i.e. query and results on same page ... on any
page. That's quite a big deal.
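
To make that concrete, a JSON rendering of the results (a sketch only - key
names are illustrative, and it glosses over the namespacing question quoted
below) could be consumed directly by a few lines of script on any page:

    {
      "query": "dinosaur",
      "numberOfRecords": 2,
      "records": [
        { "dc:title": "...", "dc:creator": "..." },
        { "dc:title": "...", "dc:creator": "..." }
      ]
    }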

I'll noodle some more on the questions I raised earlier and will see if I
can get some better answers before next meeting.

Cheers,

Tony


On 15/12/09 21:12, "Ray Denenberg, Library of Congress" <rden@loc.gov>
wrote:

> 
> Thanks, Tony .....
> 
> From: "Hammond, Tony" <t.hammond@nature.com>
>>    - Is that reasonable? (That there can be a default format for a content
>> type?)
> 
> Yes, I think so.  I think there can be a standard (SRU) wide default for a
> given content type which could be overridden by a server where the default is
> listed in Explain.
> 
> 
> 
>>    2. There's a wrinkle with RSS. In theory RSS 2.0 is usually specified
>> with mime type 'application/rss+xml' and RSS 1.0 with
>> 'application/rdf+xml'.
>> In practice, client software generally expects '.../rss+xml' for either
>> flavour. We recognized this dilemma in the CrossRef recommendations for
>> RSS
>> [1] (and see especially [2]) which ends up recommending 'application/xml'.
> 
> All the more reason for the responseType parameter, I would think. If you
> have to ask for content type 'application/rdf+xml' when you want RSS, or
> worse 'application/xml', you really need another parameter to disambiguate.
> 
> 
>>    - I have added both RSS flavours under mime type '.../rss+xml' making
>> RSS 1.0 the default. Some may prefer to make RSS 2.0 the default. (Note
>> that
>> CrossRef especially recommends RSS 1.0 for interop reasons - data model,
>> etc.)
>> 
>>    - One way out of this impasse would be to use '.../rss+xml' for RSS 2.0
>> and '.../rdf+xml' for RSS 1.0 (although that might usurp any other RDF/XML
>> serialization). But then we run foul again of client software
>> expectations.
>> I guess I'm in favour here of keeping '.../rss+xml' as the single mime type
>> for RSS.
> 
> As I see it, Explain lists each mime type supported, and for each, a list of
> response types supported for that mime type.  So we don't have to decide.
> I don't think we need to assign a standard-wide default for the hard ones.
> 
> 
>>    - I don't know how to handle this last point (especially) without turning the
>> annex into a handbook. Are the response types indicative or prescriptive?
> 
> Indicative.
> 
> 
>>    4. There are two levels of organization: the main structure and the
>> record data substructure. I don't believe there needs to be much variance
>> in
>> the main structures (although RSS maybe belies that - depending on whether
>> these are viewed as sibling mime types or as different ones). What does vary (and
>> will from producer to producer) is the record data payload. I guess we
>> should allow latitude in that and only use response type to denote a major
>> variance in the main structures. Is that reasonable?
> 
> Probably.
> 
>>    - Record data structure is where my 'unpacked' notion came in. Is it
>> flat or structured as per the original XSD? Does responseType apply to the
>> record data organization? Should there be one or more possibilities to
>> arrange the record data elements?
> 
> If I understand your motivation correctly, you want to use the SRU mime
> type, with a responseType that would render the resulting record flat.
> Thus a mime type of 'application/sru+xml' and a responseType URI something
> like info:srw/responseType/unpacked. For those who want to use SRU according
> to the published schema, same mime type with a default response type of,
> say, info:srw/responseType/default. I don't know if this answers your
> question but I think we'd need additional use cases.
> 
> 
>> 
>>    5. Namespaces (especially in JSON). I don't know what we should
>> recommend. RSS 1.0 (as RDF/XML) requires all elements (but RSS) - and
>> attributes - to be namespaced. Good practice would indicate that that is a
>> sensible policy. However, with JSON there is a problem. With NPG's JSON we
>> have namespaced as 'dc:term' and 'prism:term' and then also included a
>> namespace table within the feed. Where the OpenSearch folks are going is
>> hard to say. They probably want to drop namespaces altogether. But that
>> doesn't help with metadata. One possibility is to use objects for namespaces
>> (which is the proper JSON way of compartmentalizing), e.g.
> 
> I'll have to leave all the JSON complexity to the rest of you as I have no
> insight at all into this issue.  And it sounds like it's only JSON that
> presents a problem here, right?
> 
> 
> --Ray
> 

