Subject: RE: [dsml] Proposal for concurrent request envelope
My logic is as follows: (1) It is important to maintain the *ability* to have a positional correspondence between requests and responses, as it's simpler for the client to digest. (2) Clients should not be forced to choose between the simplicity of positional correspondence and the performance of processing="parallel" operations. That is, I believe DSML 2 *must* support a mode in which operations are performed in parallel and positional correspondence is maintained.

If there are significant differences in the speeds of processing="parallel" BatchRequest operations, the server has many options for efficiency. For example: let X = the position of the first request that has been issued for which a response has not yet been received, and let Y = the position of the last request that has been issued for which a response has not yet been received. The server is free to limit the difference Y - X to some reasonable value (i.e., by not issuing request Y+1 until it receives the response for request X), just as it is free to limit the number of operations it has outstanding against the directory at any given instant.

I have no problem stating that if clients include operations in a processing="parallel" BatchRequest that differ wildly in execution time, then they may not stream results from the server as fast as they would were the requests more uniform. (But such clients will still tend to receive results much faster than they would were all operations processed serially.)

Re RequestID, good catch. The RequestID should be an attribute rather than a child element of the BatchRequest and BatchResponse elements.
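The windowing scheme above can be sketched in code. This is a hypothetical illustration, not anything from the DSML specification: `process_batch`, `execute`, and `max_window` are invented names, and a thread pool stands in for the server's asynchronous directory operations. The sketch issues requests in parallel but never lets Y - X exceed the window, so responses can be streamed back in positional order without unbounded buffering:

```python
from concurrent.futures import ThreadPoolExecutor

def process_batch(requests, execute, max_window=4):
    """Run requests in parallel while preserving positional
    correspondence (hypothetical sketch; names are illustrative).

    X = position of the first issued-but-unanswered request.
    Y = position of the last issued request.
    We never let (Y + 1) - X exceed max_window, so the server
    buffers at most max_window responses at a time.
    """
    responses = []
    with ThreadPoolExecutor() as pool:
        futures = []        # futures[i] holds the result for position X + i
        next_to_issue = 0   # this is Y + 1
        while len(responses) < len(requests):
            # Issue ahead until the window is full or requests run out.
            while (next_to_issue < len(requests)
                   and next_to_issue - len(responses) < max_window):
                futures.append(pool.submit(execute, requests[next_to_issue]))
                next_to_issue += 1
            # Block on the oldest outstanding request (position X); its
            # result is the next one the client expects, so it can be
            # streamed to the client immediately.
            responses.append(futures.pop(0).result())
    return responses
```

Even if one slow operation stalls the window, the other operations inside the window still run concurrently, which is the intuition behind the claim that such clients still beat fully serial processing.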
-J

-----Original Message-----
From: Rob Weltman [mailto:robw@worldspot.com]
Sent: Sunday, October 14, 2001 10:15 PM
To: dsml@lists.oasis-open.org
Subject: [dsml] Proposal for concurrent request envelope

The issue I raised in Wednesday's teleconf was that the ordering requirement for responses in a batchResponse with parallel mode may be expensive for a server to implement and may negate some of the benefits of parallel mode for a client. A server must be prepared to buffer the results of all requests before beginning to return a response document to the client, and the client may not begin to receive the response document until the server has assembled all responses. It was mentioned that in many cases the results will be available on the server in roughly the same order the requests were issued (i.e., their order in the request document), but there are no guarantees, and the server must be prepared for cases where the asynchronous requests do not yield results in the same order.

Christine pointed out that it may be better to separate the parallel case into a separate request (and response) envelope. By selecting the concurrent request envelope, the client is asserting that it doesn't care in which order the operations are executed _or in which order the responses are returned_. The concurrent request must include a request ID for each operation, and the concurrent response must associate the request ID with the corresponding operation response.

I also changed the batchRequest and batchResponse so that there is an optional request ID with each operation (instead of an optional and unlimited number of requestIDs associated with the batchRequest and batchResponse as a whole).

Rob
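To make the per-operation request ID concrete, here is a sketch of what such a batch might look like on the wire. This is illustrative only: the element and attribute names follow the DSMLv2 drafts of the time, but the DNs and values are invented, and the exact shape of the concurrent envelope was still under discussion in this thread.

```xml
<batchRequest xmlns="urn:oasis:names:tc:DSML:2:0:core"
              processing="parallel">
  <!-- requestID is an attribute on each operation, not a child
       element of batchRequest (per the proposal above) -->
  <searchRequest requestID="1"
                 dn="ou=people,dc=example,dc=com"
                 scope="wholeSubtree"
                 derefAliases="neverDerefAliases">
    <filter><present name="objectclass"/></filter>
  </searchRequest>
  <modifyRequest requestID="2"
                 dn="cn=Alice,ou=people,dc=example,dc=com">
    <modification name="mail" operation="replace">
      <value>alice@example.com</value>
    </modification>
  </modifyRequest>
</batchRequest>
```

In a concurrent (order-free) response envelope, each operation response would carry the matching requestID so the client can correlate it with its request regardless of arrival order.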