sca-j message



Subject: Re: [sca-j] Another early morning brainstorm - conversations revisited



Jim,

I've switched to internet-style replies to try to make it easier for folks to see the comments I'm making....

also using red

Yours,  Mike.

Strategist - Emerging Technologies, SCA & SDO.
Co Chair OASIS SCA Assembly TC.
IBM Hursley Park, Mail Point 146, Winchester, SO21 2JN, Great Britain.
Phone & FAX: +44-1962-818014    Mobile: +44-7802-467431  
Email:  mike_edwards@uk.ibm.com


Jim Marino <jim.marino@gmail.com> wrote on 08/08/2008 17:10:49:

>
> Re: [sca-j] Another early morning brainstorm - conversations revisited
> Jim Marino
> To: OASIS Java
> 08/08/2008 17:12

>
> On Aug 8, 2008, at 8:34 AM, Mike Edwards wrote:

>
>
> Jim,
>
> I don't think we've yet got a meeting of minds on the problems here.

>
> Perhaps we are talking past one another. I'll address each of
> your points below. It may be the case that we need to discuss this on the phone.

>
> 1)
> "My logic does cover this, particularly steps 7-9. However, server-
> side instantiation can be done lazily in most cases. "
>
> This misses my major point.
> For a conversation, it isn't the creation of an instance (etc) that
> matters.  It is the creation of the server-side context, which takes
> place DURING the processing of the forward operation that
> establishes the conversation.  The vital thing is that this *MUST*
> occur before any further forward operations are handled.  And my
> point is that in the general case, the order of forward operations is *VITAL*.

>
> Then you agree with me. I believe my logic outlined previously
> handles this case. I may be wrong though, so perhaps you can point
> out issues in what I have outlined?


I don't see anything in your proposal which guarantees that the "first" operation - i.e. the one
that starts the conversation - actually is the first one to reach the server.  But it had better be,
otherwise there is no server state to which any subsequent operations can relate.


>  
> So - how do we construct a model that deals with this?
>
> 2)
> "1. It will not work with transacted messaging"
>
> OK, so let's examine the case of transacted messaging.
>
> My first observation is that in the case of transacted messaging,
> ALL the forward operations involved are going to have to be one-way
> operations.
> The second observation is that since the forward messages don't get
> sent until the transaction commits, if there are multiple forward
> operations, then the second and subsequent calls can't depend on any
> data being returned from the earlier operations (eg via callbacks).
> That isn't to say that there can't be any callbacks - just that they
> cannot be received until the transaction commits.

>
> Yes! And it means we can't rely on a CallableReference (or other
> message) sent to a reply-to queue.

>

Agreed, but it is more brutal than that.  In this transacted case, the client can't actually depend
on anything from the server when making its forward requests.  So all that you have is a series
of forward one-way operations where the input to each operation is actually independent
of the other operations.

Frankly, if this were the case, I suggest a redesign of the service interface to have a single
"coarse" operation that contains all of the data that would otherwise have gone into the sequence
of one-way operations.  There is no benefit whatsoever to having separate forward operations.
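
As a rough sketch of what that redesign might look like (all interface, operation and type names here are invented purely for illustration):

```java
// Each type would live in its own source file; names are illustrative only.

// Fine-grained, one-way style: with transacted messaging the input to each call
// must be independent of the others, so the separation buys nothing.
interface FineGrainedOrderService {
    void startOrder(String customerId);
    void orderApples(int quantity);
    void orderPlums(int quantity);
}

// Coarse-grained alternative: a single one-way operation carrying all the data
// that would otherwise have been spread across the sequence of calls.
interface CoarseOrderService {
    void placeOrder(OrderRequest request);
}

// Simple data holder bundling everything the "conversation" would have accumulated.
class OrderRequest {
    String customerId;
    int apples;
    int plums;
}
```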

> I still believe that in the general case, the first "conversation
> starting" operation must complete before any of the following
> operations can proceed.  Anything else leads to chaos.
>
> So, I think that this would place the following requirements on the
> server side of things (possibly shared between infrastructure and
> service implementation):
>
> - the "conversation start" operation must be identified and dealt
> with specially
> - the conversation start operation must complete before any other
> operations proceed

>
> Yes! But I note that the "conversation start" call is not a service
> operation. It is performed by the SCA implementation and is never
> visible to application code. Also, it never results in an invocation
> of a service operation.


This is where I really part company with you.  For you, it seems that a conversation is something that is
independent of the state of the service implementation.  For me, the conversation is ALL ABOUT the
state data in the service implementation.  In my way of thinking you can't create a conversation unless
there is creation of the state data inside the service implementation.  So to envisage "conversation
start" as being something that the infrastructure code can do on its own makes no sense whatsoever.

I think that this is one of the key issues - possibly THE key issue - separating us.
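
To make concrete what I mean by the conversation being the state data, here is a minimal sketch of a conversation-scoped implementation, using the OSOA SCA 1.0 Java annotations.  The interface, operations and field names are invented for illustration only:

```java
import java.util.ArrayList;
import java.util.List;

import org.osoa.sca.annotations.Scope;
import org.osoa.sca.annotations.Service;

// Hypothetical business interface (in its own source file in practice).
interface OrderService {
    void startOrder(String customerId);
    void orderApples(int quantity);
    void orderPlums(int quantity);
}

// The conversation *is* the state held by this instance.  Until the
// conversation-starting operation has run, there is no context for the
// later operations to relate to.
@Service(OrderService.class)
@Scope("CONVERSATION")
class OrderServiceImpl implements OrderService {

    private String customerId;                          // set by the starting operation
    private final List<String> lines = new ArrayList<String>();

    public void startOrder(String customerId) {
        this.customerId = customerId;                   // server-side context created here
    }

    public void orderApples(int quantity) {
        lines.add("apples x " + quantity);              // only meaningful after startOrder
    }

    public void orderPlums(int quantity) {
        lines.add("plums x " + quantity);
    }
}
```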

>
> ONE WAY to achieve this would be to require:
> - forward messages to be handled in strict order (Yes, only the
> first one needs this, but we don't have a way to only request one
> message to be in-order)
> - only one operation at a time can be processed on the server for a
> given conversation (again, this strictly only need apply to the
> first operation in the conversation)
> (I think that this can only be accomplished if the infrastructure
> code does some special work - for subsequent operations an instance
> could use locking, but it is
> very hard to do this for the initial operation since there is a time
> window between instance creation and getting the lock during
> processing of the 1st operation).
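
A very rough sketch of the kind of infrastructure-side serialization described above - dispatch guarded by a lock keyed on the conversation id, so the window between instance creation and locking is closed - might look like this (class and method names invented for illustration):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.locks.ReentrantLock;

// "Only one operation at a time per conversation": serialize dispatch on a lock
// keyed by conversation id.  The per-conversation lock is taken before the target
// instance is looked up or created, so the initial operation is covered too.
public class ConversationSerializer {

    private final ConcurrentMap<String, ReentrantLock> locks =
            new ConcurrentHashMap<String, ReentrantLock>();

    public void dispatch(String conversationId, Runnable invocation) {
        ReentrantLock lock = locks.get(conversationId);
        if (lock == null) {
            ReentrantLock candidate = new ReentrantLock();
            ReentrantLock existing = locks.putIfAbsent(conversationId, candidate);
            lock = (existing != null) ? existing : candidate;
        }
        lock.lock();
        try {
            invocation.run();   // create/initialize the instance and invoke the operation
        } finally {
            lock.unlock();
        }
    }
}
```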

>
> I don't believe so. Again, I may be mistaken but I believe the logic
> I outlined handles locking and does not require ordered messaging.
> Steps 7-9 of the second scenario (plum orders arriving before apple
> orders) demonstrate how this can be handled.

>  

You only lock to create an ID.  How does this help the service implementation?
It is not clear to me from your example.


> 3)
> " 2. It ties service provider implementations and clients to SCA and
> exposes implementation details in the service design
> 3. It will make interop extremely difficult "
>
> I don't understand either of these comments - you will need to
> explain them in more detail.

>
> Both I and Meeraj sent emails earlier in the thread about this. It
> would be helpful if you could go back and respond to those in line
> rather than repeat what we wrote here.

>

Can you reply with the URLs of the emails that you are referring to?  The chain has got too long for me
to work out which email(s) you are referring to.

> I think that the requirements that you are laying down above are
> putting an extreme amount of expectation on the infrastructure, if
> the client and the service implementations are to be "unaware" of
> all that is going on.  There are also piles of unexpressed semantics
> involved too, which would have to be part of the design of both
> client and of the provider.

>
> I'm not sure about that. I believe most of this can be done in a
> fairly lightweight fashion. For things like ordered delivery (which
> would be one way of handling conversationality in a cluster but
> certainly not required to implement what we are discussing here), we
> fortunately have things like shared disks and Websphere MQ that do
> most of the heavy lifting :-)

>
> Finally, these requirements are the things that make interop
> difficult - where in the specs for interoperable protocols are the
> place(s) for holding info like the conversation ID which also imply
> the semantics I've laid out above?

>
> Interop is going to be difficult enough here without complicating
> things by introducing an SCA type in service operation signatures.
> For example, how would a non-SCA JMS, JAX-WS or .NET client respond
> to a CallableReference? I don't think it can be mapped to some type
> of endpoint address as it contains behavior (i.e. the ability to
> invoke on it).


I agree regarding the idea of sending a CallableReference.  I'm not in favour of that.
But what about the conversation ID that you need to send in various places?
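
(For JMS, one obvious candidate would be a user property on the message, roughly as in the sketch below.  The property name used here is only an assumption for illustration, not something the JMS binding spec is being claimed to mandate.)

```java
import javax.jms.Connection;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;

// Illustrative only: carrying the conversation id as a JMS user property so
// that even a non-SCA consumer can at least read it as a plain string.
public class ConversationIdSender {

    public void send(Connection connection, Queue queue,
                     String conversationId, String payload) throws Exception {
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        try {
            MessageProducer producer = session.createProducer(queue);
            TextMessage message = session.createTextMessage(payload);
            // Property name is an assumption made for this sketch.
            message.setStringProperty("scaConversationId", conversationId);
            producer.send(message);
        } finally {
            session.close();
        }
    }
}
```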

>
> Jim

>
>
>
> Yours,  Mike.
>
> Strategist - Emerging Technologies, SCA & SDO.
> Co Chair OASIS SCA Assembly TC.
> IBM Hursley Park, Mail Point 146, Winchester, SO21 2JN, Great Britain.
> Phone & FAX: +44-1962-818014    Mobile: +44-7802-467431  
> Email:  mike_edwards@uk.ibm.com
>

>
> From: Jim Marino <jim.marino@gmail.com>
> To: OASIS Java <sca-j@lists.oasis-open.org>
> Date: 08/08/2008 14:10
> Subject: Re: [sca-j] Another early morning brainstorm - conversations revisited

>
>
>
>
>
> Hi Mike,
>
> Comments inline.
>
> Jim
>
> On Aug 8, 2008, at 5:21 AM, Mike Edwards wrote:
>
>
> Folks,
>
> I'm going to reply to Jim's last note by copying out the relevant
> pieces, so that it is easier to follow what is being debated....
>
> ---------------------------- Jim's stuff
> Sorry, I  didn't explain well. Let me try with an example. The
> current programming model will work fine in the case you outline
> above, that is, when an invocation arrives out of order as long as
> the creation of the conversation id is a synchronous event from the
> perspective of the client reference proxy. Let's start by taking a
> simplistic SCA implementation and look at the sequence of events
> that would happen:
>
> 1. Application code invokes a reference proxy that represents a
> conversational service by calling the orderApples operation
> 2. The reference proxy knows a conversation has not been started so
> acquires a write lock for the conversation id and *synchronously*
> performs some work to get a conversation id. Since this is a
> simplistic implementation, it generates a UUID and caches it, then
> releases the write lock. From this point on, the conversation id is
> available to the reference proxy.
> 3. The reference proxy then invokes the orderApples operation over
> some transport, flowing the id and invocation parameters
> 4. The reference proxy returns control to the application logic
> 5. At some later time, the orderApples invocation arrives in the
> runtime hosting the target service
> 6. If the target instance is not created or initialized, the runtime does so
> 7. The runtime dispatches the orderApples invocation to the target instance.  
>
> Now let's assume the client invokes both orderApples and orderPlums
> in that order, which are non-blocking. Let's also assume ordered
> messaging is not used (e.g. the dispatch is in-VM using a thread
> pool) and for some reason orderPlums is delivered to the target
> runtime before orderApples. Steps 1-4 from above remain the same. Then:
>
> 5. The application logic invokes orderPlums.
> 6. The reference proxy acquires a read lock to read the conversation
> id which it obtains immediately. It then releases the lock and flows
> the invocation data and id over some transport.  
> 7. The orderPlums request arrives, along with the conversation id
> created in step 2 before the orderApples invocation.  
> 8. The target instance does not exist, so the runtime instantiates
> and initializes it
> 9. The orderApples invocation containing the operation parameters
> and same conversation id arrives on the target runtime. At this
> point the target instance is already created so the runtime
> dispatches to it.  
>
> Note that the read/write lock would not be necessary for clients
> that are stateless components since they are thread-safe.
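
A minimal sketch of the proxy-side logic in steps 2 and 6 above - the id created synchronously under a write lock before any forward message is sent, and only a read lock taken on later calls - assuming invented class and method names:

```java
import java.util.UUID;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Held by the reference proxy; all forward invocations call getOrCreateId()
// before flowing the invocation over the transport.
public class ConversationIdHolder {

    private final ReadWriteLock lock = new ReentrantReadWriteLock();
    private String conversationId;

    public String getOrCreateId() {
        lock.readLock().lock();
        try {
            if (conversationId != null) {
                return conversationId;                        // step 6: id already available
            }
        } finally {
            lock.readLock().unlock();
        }
        lock.writeLock().lock();
        try {
            if (conversationId == null) {                     // re-check under the write lock
                conversationId = UUID.randomUUID().toString(); // step 2: simplistic generation
            }
            return conversationId;
        } finally {
            lock.writeLock().unlock();
        }
    }
}
```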
>
> In the above cases, the creation of a conversation id is orthogonal
> to the creation of a provider instance and the former always
> completes prior to an instance being created. If we were to replace
> the conversation generation algorithm (UUID) with something that was
> more complex (e.g. called out to the service provider runtime) the
> same sequence would hold.  
>
> Also, the above sequence solves the problem of using conversational
> services with transacted messaging that arises from forcing a request-
> reply pattern to be part of the forward service contract.
> ------------------------------ end of Jim's stuff
>
> The thing that this logic avoids discussing is that for there to be
> a conversation, something has to get started on the server side - in
> my opinion, it is ESSENTIALLY a server-side concept.
>
> My logic does cover this, particularly steps 7-9. However, server-
> side instantiation can be done lazily in most cases. I would expect
> many SCA implementations to do this or be required to do this
> depending on the transport used. In this case, think of the
> synchronous id generation as being a guarantee on the part of the
> SCA infrastructure that an instance will be created and that it will
> maintain consistency should multiple invocations arrive out of
> order. This is kind of like 2PC but a lot easier to implement.
>
> From the application developer's perspective, it is even simpler and
> they are comfortably shielded from all of this, as shown in this code snippet:
>
> public void doIt() {
>    orderService.orderApples(12);
>    orderService.orderPlums(12);
> }
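
For context, the kind of forward contract that snippet would presumably run against, sketched with the OSOA 1.0 annotations - interface and operation names are illustrative, and note that neither the conversation id nor any SCA type appears in the signatures:

```java
import org.osoa.sca.annotations.Conversational;
import org.osoa.sca.annotations.OneWay;

// Hypothetical forward contract for the snippet above: both operations are
// one-way; the reference proxy handles the conversation id transparently.
@Conversational
public interface OrderService {

    @OneWay
    void orderApples(int quantity);

    @OneWay
    void orderPlums(int quantity);
}
```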
>  
>
>
> The idea is that in a conversation, when a second operation
> invocation happens on the server side, it does so in the context of
> some state that was established by a first operation.
> Instances, etc, are not the main point - they are only one means to
> achieve this end.
>
> I'll note that is entirely consistent with my proposal.
>
> It is important to understand that Op A followed by Op B may make
> some sense, while Op B followed by Op A may make no sense at all, in
> terms of a conversation.
> eg:
> Op A = "Start an order"
> Op B = "Add to an order"  - called before Op A, the response is
> going to be "what Order?"
>
> In principle, I think that an operation that STARTS a conversation
> is special, in that it must do something on the server before the
> conversation can be said to be "in progress".
> At the very least it must set up whatever the state is that is the
> context for future operations.
>
> The only obvious way to guarantee this is to ensure that the server-
> side operation that starts the conversation is complete before any
> further operations are done within the same conversation.
>
> Note that my reasoning here does not depend on ANY mechanics - I
> think that it is true whatever the mechanics.
>
> ONE WAY to achieve the required behaviour is to require that any
> operation which is to start a conversation is treated as a
> Synchronous operation - so that it is guaranteed to be complete
> before the next operation is invoked.  This approach is at least
> simple to understand at a programming level and I believe that it is
> straightforward for binding implementers to handle.  It is easy to
> warn a client programmer that they must call operation X first and
> must not call any other operations on the same reference proxy until
> it completes (e.g. if they are multi-threading).
>
>
> The statement, "ONE WAY to achieve the required behaviour is to
> require that any operation which is to start a conversation is
> treated as a Synchronous operation" is exactly what I am proposing.
> However, my point is that this operation should not be a *service*
> operation. Rather, it is something done by the SCA infrastructure.
> If that is not done, and SCA makes a requirement that all
> conversational services have a synchronous request/reply start
> operation, SCA will *not work* with transacted messaging. This means
> it is not straightforward to handle for many binding implementations
> based on message providers such as a JMS implementation, Oracle AQ,
> or Websphere MQ. That, IMO, is a show-stopper since transacted
> messaging is one of the key use cases for SCA and I would venture to
> say much more widely used than Web Services in enterprise computing.
>  
> I'll also note that with my proposal, it is entirely possible for an
> application developer to write their service using synchronous
> request-reply but it will not work with multiple invocations using
> transacted messaging. They would either have to use auto acknowledge
> or commit on the session for each invocation.
>  
>  
> - other approaches might be possible, but seem more complex to me.  
> For example, require strict sequencing of the forward messages ("in
> order", "once and only once"), allied to a requirement to actually
> process messages after the first only once processing of the first
> is complete.
>
>
> Service contract operation sequencing is a separate concern from
> allocation of a conversation id or instance creation. I have a
> simple solution to enabling basic service contract operation
> sequencing that I will cover in my response to Simon's email from
> this morning. I will try to send something today or tomorrow.
>
> <snip/>
>
> Expanding on point 2, requiring a conversation to be initiated by a
> synchronous operation *to the service* cannot work over JMS when the
> client is using transacted messaging since messages are not sent
> until after the transaction has committed. This is a very common
> messaging scenario. Assuming the callback is handled via a reply-to
> queue that the client listens on, the consumer only receives
> enqueued messages when a transaction commits, thereby inhibiting the
> client from receiving a response.  If in the original above example
> the client is participating in a global transaction and
> OrderService.startNewOrder() returns a CallableReference or a proxy,
> the client will hang as the forward message will not be received
> until the transaction commits (which won't occur since the client
> would be listening on the reply-to queue).
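
A bare-bones illustration of the timing being described: with a transacted JMS session nothing is delivered until commit(), so a forward call cannot block waiting for a reply produced by that same send.  Queue name, payloads and class name below are invented for the sketch:

```java
import javax.jms.Connection;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;

// Sketch of a client sending two forward one-way requests in one transaction.
public class TransactedClient {

    public void placeOrders(Connection connection, Queue orders) throws Exception {
        Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
        try {
            MessageProducer producer = session.createProducer(orders);
            producer.send(session.createTextMessage("orderApples:12"));
            producer.send(session.createTextMessage("orderPlums:12"));
            // Neither message is delivered until here; blocking for a reply
            // before this point would deadlock the client.
            session.commit();
        } finally {
            session.close();
        }
    }
}
```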
>
> To avoid this, I imagine most JMS binding implementations would use
> the mechanisms as described by the JMS binding spec and pass a
> conversation id in the message header.
>
> Therefore, I believe your proposal won't work in these scenarios.
>
> <scn3>I understand your point about JMS.  However, you haven't
> addressed my other point about the conversational provider instance
> needing to be created and initialized before further invocations are
> made on it.  For transports that provide reliable queued in-order
> delivery of messages (the JMS case) the transport can take care of
> this.  For other transports, the first invocation must execute and
> complete before the second one can occur.  This serialization needs
> to be handled somehow, either by the application or by the
> infrastructure.</scn3>
>
> Sorry, I  didn't explain well. Let me try with an example. The
> current programming model will work fine in the case you outline
> above, that is, when an invocation arrives out of order as long as
> the creation of the conversation id is a synchronous event from the
> perspective of the client reference proxy. Let's start by taking a
> simplistic SCA implementation and look at the sequence of events
> that would happen:
>
> 1. Application code invokes a reference proxy that represents a
> conversational service by calling the orderApples operation
> 2. The reference proxy knows a conversation has not been started so
> acquires a write lock for the conversation id and *synchronously*
> performs some work to get a conversation id. Since this is a
> simplistic implementation, it generates a UUID and caches it, then
> releases the write lock. From this point on, the conversation id is
> available to the reference proxy.
> 3. The reference proxy then invokes the orderApples operation over
> some transport, flowing the id and invocation parameters
> 4. The reference proxy returns control to the application logic
> 5. At some later time, the orderApples invocation arrives in the
> runtime hosting the target service
> 6. If the target instance is not created or initialized, the runtime does so
> 7. The runtime dispatches the orderApples invocation to the target instance.
>
> Now let's assume the client invokes both orderApples and orderPlums
> in that order, which are non-blocking. Let's also assume ordered
> messaging is not used (e.g. the dispatch is in-VM using a thread
> pool) and for some reason orderPlums is delivered to the target
> runtime before orderApples. Steps 1-4 from above remain the same. Then:
>
> 5. The application logic invokes orderPlums.
> 6. The reference proxy acquires a read lock to read the conversation
> id which it obtains immediately. It then releases the lock and flows
> the invocation data and id over some transport.
> 7. The orderPlums request arrives, along with the conversation id
> created in step 2 before the orderApples invocation.
> 8. The target instance does not exist, so the runtime instantiates
> and initializes it
> 9. The orderApples invocation containing the operation parameters
> and same conversation id arrives on the target runtime. At this
> point the target instance is already created so the runtime dispatches to it.
>
> Note that the read/write lock would not be necessary for clients
> that are stateless components since they are thread-safe.
>
> In the above cases, the creation of a conversation id is orthogonal
> to the creation of a provider instance and the former always
> completes prior to an instance being created. If we were to replace
> the conversation generation algorithm (UUID) with something that was
> more complex (e.g. called out to the service provider runtime) the
> same sequence would hold.
>
> Also, the above sequence solves the problem of using conversational
> services with transacted messaging that arises from forcing a request-
> reply pattern to be part of the forward service contract.
>
> 2. Clarification on service operation signatures
>
> I'm unclear if by the following the proposal intends to require use
> of CallableReference for conversational interactions:
>
> A simple extension to the model already proposed can solve both
> these problems.  A conversation would be initiated by the service
> creating a CallableReference and returning it to the client.  This
> CallableReference contains an identity for the conversation.  The
> client then makes multiple calls through this CallableReference
> instance.  Because these calls all carry the same identity, a
> conversation-scoped service will dispatch all of them to the same instance.
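
To make the quoted proposal concrete, a rough sketch of what such a conversation-starting operation might look like on the provider side, using the OSOA 1.0 API.  Whether createSelfReference is the right way to mint such a reference is itself open to debate, and all names here are illustrative:

```java
import org.osoa.sca.CallableReference;
import org.osoa.sca.ComponentContext;
import org.osoa.sca.annotations.Context;
import org.osoa.sca.annotations.Scope;

// The pattern being debated: the start operation returns a CallableReference
// carrying the conversation identity back to the client.
// (Each type would live in its own source file.)
interface OrderService {
    CallableReference<OrderService> startNewOrder();
    void orderApples(int quantity);
    void orderPlums(int quantity);
}

@Scope("CONVERSATION")
class OrderServiceImpl implements OrderService {

    @Context
    protected ComponentContext context;

    public CallableReference<OrderService> startNewOrder() {
        // Calls made through the returned reference carry this instance's
        // conversation identity and so come back to the same instance.
        return context.createSelfReference(OrderService.class);
    }

    public void orderApples(int quantity) { /* accumulate conversation state */ }

    public void orderPlums(int quantity) { /* accumulate conversation state */ }
}
```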
>
>
> It may have been hard to follow, but this discussion started
> with the assertion that a synchronous request-
> reply operation that returned a CallableReference was problematic
> for a number of reasons. Three of the most important are:
>
> 1. It will not work with transacted messaging
> 2. It ties service provider implementations and clients to SCA and
> exposes implementation details in the service design
> 3. It will make interop extremely difficult
>
> I've snipped the above message to include the discussion on point 1.
> Points 2-3 are covered in earlier emails by myself and Meeraj.
>
> Jim
>
>
>
>
>
>
>







Unless stated otherwise above:
IBM United Kingdom Limited - Registered in England and Wales with number 741598.
Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU







