

Subject: Re: [dsml] Proposal for concurrent request envelope




Jeff Parham wrote:

>My logic is as follows:
>
>(1) It is important to maintain the *ability* to have a positional
>correspondence between requests and responses, as it's simpler for the
>client to digest.
>
Agreed

>
>(2) Clients should not be forced to choose between the simplicity of
>positional correspondence and the performance of processing="parallel"
>operations.  
>
Why not?  There is usually a trade-off when performance is paramount, 
and if that trade-off can be made as client complexity in exchange for 
server performance, then I think it would be a good trade.

>
>I.e., I believe that DSML 2 *must* support a mode in which operations
>are performed in parallel and positional correspondence is maintained.
>
Before we get to *must*, let's establish whether it is possible to honour 
the promise of greater performance that this directive implies.

>
>If there are significant differences in the speeds of
>processing="parallel" BatchRequest operations, the server has many
>options to choose for efficiency.  For example: Let X = the position of
>the first request that has been issued for which a response has not yet
>been received.  Let Y = the position of the last request that has been
>issued for which a response has not yet been received.  The server is free to
>decide to limit the difference Y - X to some reasonable value (i.e., by
>not issuing request Y+1 until it receives the response for X), just as
>it is free to limit the number of operations it has outstanding against
>the directory at any given instant.  I have no problem stating that if
>
This algorithm only partially alleviates the problem, and it does not 
address the key case where one of the requests between X and Y returns 
many entries (thousands, millions, whatever) that must be cached until 
previous operations have completed.  Unless of course the server limit 
is Y - X = 0, or in other words no parallel processing.  Y - X > 0 is 
simply not scalable when the order of the requests must be honoured in 
the order of the responses, unless further server limits are applied to 
parallel requests, which would give a different result than if the 
requests were made sequentially, i.e. a much lower tolerance of 
outstanding result-set size, with resulting "server unwilling to 
perform" responses.  Perhaps that is a reasonable compromise, but it 
feels wrong, since the server's efforts to control its use of resources 
essentially negate the possible benefits of parallel processing.  Even 
with that compromise, it is debatable whether performance would actually 
increase at all, given the overhead of the additional caching and its 
maintenance.
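To make the caching cost concrete, here is a rough sketch of the server 
loop Jeff describes -- illustrative only; the names and the window size 
are mine, not anything from the proposal.  Requests run in parallel, at 
most `window` in flight (Y - X bounded), but responses must be emitted 
in request order, so any request that finishes early is held until every 
earlier one has completed:

```python
from concurrent.futures import ThreadPoolExecutor

def run_parallel_ordered(requests, handle, window=4):
    """Run handle(req) concurrently, but yield results in request order,
    keeping at most `window` requests in flight (Y - X <= window)."""
    with ThreadPoolExecutor(max_workers=window) as pool:
        in_flight = []  # futures, in request (i.e. response) order
        for req in requests:
            in_flight.append(pool.submit(handle, req))
            if len(in_flight) < window:
                continue
            # Window full: block on the *oldest* outstanding request
            # before issuing more.  Any later request that has already
            # finished sits buffered in its future -- if it produced a
            # large result set, the server is caching all of it here.
            yield in_flight.pop(0).result()
        # Drain the remaining requests, still in order.
        for fut in in_flight:
            yield fut.result()
```

Note that the one blocking point is always the oldest request: a single 
slow operation at position X stalls emission of everything behind it, 
which is exactly the buffering problem above.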

>
>clients include operations in a processing="parallel" BatchRequest that
>differ wildly in execution time then they may not stream results from
>the server as fast as they would were the requests more uniform.  (But
>such clients will still tend to receive results much faster than they
>would were all operations processed serially.)
>
There are two assumptions here: a) uniform execution time = no problem 
(it is the same problem), and b) the performance consequences of the 
server conforming to this algorithm fall only on the client which issued 
the parallel request.  I do not think those assumptions can be made. 
 The gymnastics which the server must perform to honour response 
ordering can and will detrimentally affect performance for all clients, 
possibly *including* the client which issued the directive.

Pete

>
>Re RequestID, good catch.  The RequestID should be an attribute rather
>than a child element of the BatchRequest and BatchResponse elements.
>
>-J
>
>-----Original Message-----
>From: Rob Weltman [mailto:robw@worldspot.com] 
>Sent: Sunday, October 14, 2001 10:15 PM
>To: dsml@lists.oasis-open.org
>Subject: [dsml] Proposal for concurrent request envelope
>
>
>  The issue I raised in Wednesday's teleconf was that the ordering
>requirement for responses in a batchResponse with parallel mode may be
>expensive for a server to implement and negate some of the benefits of
>parallel mode for a client. A server must be prepared to buffer the
>results of all requests before beginning to return a response document
>to the client, and the client may not begin to receive the response
>document until the server has assembled all responses.
>
>  It was mentioned that in many cases the results will be available on
>the server in roughly the same order the requests were issued (i.e.
>their order in the request document), but there are no guarantees and
>the server must be prepared for the cases where the asynchronous requests
>do not yield results in the same order.
>
>  Christine pointed out that it may be better to separate the parallel
>case as a separate request (and response) envelope. By selecting the
>concurrent request envelope, the client is asserting that it doesn't
>care in which order the operations are executed _or in which order the
>responses are returned_. The concurrent request must include a request
>ID for each operation and the concurrent response must associate the
>request ID with the corresponding operation response.
>
>  I also changed the batchRequest and batchResponse so that there is an
>optional request ID with each operation (instead of an optional and
>unlimited number of requestIDs associated with the batchRequest and
>batchResponse as a whole).
>
>Rob
>
>----------------------------------------------------------------
>To subscribe or unsubscribe from this elist use the subscription
>manager: <http://lists.oasis-open.org/ob/adm.pl>
>
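For illustration, the per-operation request ID Rob proposes might look 
something like this -- a sketch only, not the agreed schema; the element 
and attribute names here are loosely modelled on the DSML drafts:

```xml
<batchRequest processing="parallel">
  <delRequest requestID="1" dn="cn=old,dc=example,dc=com"/>
  <delRequest requestID="2" dn="cn=older,dc=example,dc=com"/>
</batchRequest>

<!-- In the concurrent envelope, responses may arrive in completion
     order; the requestID attribute, not position, ties each response
     back to its request. -->
<batchResponse>
  <delResponse requestID="2"/>
  <delResponse requestID="1"/>
</batchResponse>
```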

-- 
Pete Rowley
Developer
Netscape Directory Server




