

Subject: [OASIS Issue Tracker] Created: (ODATA-240) Better describe, and possibly extend, expected behavior of dealing with async $batch requests


Better describe, and possibly extend, expected behavior of dealing with async $batch requests
---------------------------------------------------------------------------------------------

                 Key: ODATA-240
                 URL: http://tools.oasis-open.org/issues/browse/ODATA-240
             Project: OASIS Open Data Protocol (OData) TC
          Issue Type: Improvement
          Components: OData Protocol v1.0
            Reporter: Hubert Heijkers


$batch used to return 202 Accepted to indicate that the server had accepted the $batch request, even though it might not yet have read, let alone processed, all the requests in the batch. That 202 Accepted response conflicted with the use of 202 for asynchronous requests; this has been dealt with in ODATA-233.
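For context, here is a minimal client-side sketch of the asynchronous pattern referenced here, assuming the respond-async preference and a Location-based status monitor along the lines discussed in ODATA-233; the service URL, header names, and polling details are illustrative, not settled behavior:

    import time
    import requests  # third-party HTTP client, used here purely for illustration

    SERVICE_ROOT = "https://example.org/odata"  # hypothetical service root


    def submit_async_batch(batch_body: str) -> requests.Response:
        """Send a $batch request asking for asynchronous processing.

        A server honoring the preference is expected to answer 202 Accepted
        with a Location header pointing at a status-monitor resource.
        """
        return requests.post(
            f"{SERVICE_ROOT}/$batch",
            data=batch_body,
            headers={
                "Content-Type": "multipart/mixed; boundary=batch_1",
                "Prefer": "respond-async",
            },
        )


    def poll_until_done(monitor_url: str, interval: float = 2.0) -> requests.Response:
        """Poll the status monitor until the batch result is available.

        202 means 'still processing'; anything else is taken as the final
        (message/http wrapped) result of the batch.
        """
        while True:
            response = requests.get(monitor_url)
            if response.status_code != 202:
                return response
            # Assumes Retry-After, if present, is given in seconds.
            delay = float(response.headers.get("Retry-After", interval))
            time.sleep(delay)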

So now we can distinguish between batch requests that are handled synchronously and those handled asynchronously, but that raises the question: in asynchronous mode, when should the results of a batch request be returned?

Some of the requests/change sets in a batch might take almost no time to process, while others might take minutes if not longer. Our proposal for asynchronous requests wraps the result of such a request in a message/http response. Should we collect all the results and return them only once the complete batch has been processed? Or should we start returning results as soon as the result of the first request is available, effectively turning the asynchronous request into a synchronous one from that point onwards?

At a minimum we would have to describe the expected behavior of an asynchronous batch request, but we might also need to consider extending what is there already and make it possible to return the results of a batch in chunks (not to be confused with chunked transfer encoding). Would we have a next link to follow, which in turn could return a 202 again if the next chunk of the batch isn't available yet?
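One possible shape of that chunked approach, sketched from the client's point of view; the next-link header name and the reuse of 202 for a not-yet-ready chunk are assumptions for discussion, not agreed-upon behavior:

    import time
    import requests  # illustration only; any HTTP client would do


    def read_batch_in_chunks(first_chunk_url: str, interval: float = 2.0):
        """Yield batch result chunks one at a time by following next links.

        Hypothetical flow: each chunk carries the responses that are ready
        so far plus a next link; requesting the next link may return 202
        again if the following chunk is not available yet.
        """
        url = first_chunk_url
        while url:
            response = requests.get(url)
            if response.status_code == 202:
                # Next chunk not ready yet; wait and retry the same link.
                # Assumes Retry-After, if present, is given in seconds.
                time.sleep(float(response.headers.get("Retry-After", interval)))
                continue
            yield response  # a chunk of wrapped individual responses
            # Returning the next link in a header is purely illustrative.
            url = response.headers.get("odata.nextLink")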


-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: http://tools.oasis-open.org/issues/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira

        

