sca-j message



Subject: Re: [sca-j] Another early morning brainstorm - conversations revisited


Hi Mike,

Comments inline.

Jim

On Aug 8, 2008, at 5:21 AM, Mike Edwards wrote:


Folks,

I'm going to reply to Jim's last note by copying out the relevant pieces, so that it is easier to follow what is being debated....

---------------------------- Jim's stuff
Sorry, I didn't explain well. Let me try with an example. The current programming model will work fine in the case you outline above, that is, when an invocation arrives out of order, as long as the creation of the conversation id is a synchronous event from the perspective of the client reference proxy. Let's start by taking a simplistic SCA implementation and look at the sequence of events that would happen:

1. Application code invokes a reference proxy that represents a conversational service by calling the orderApples operation.
2. The reference proxy knows a conversation has not been started, so it acquires a write lock for the conversation id and *synchronously* performs some work to get a conversation id. Since this is a simplistic implementation, it generates a UUID and caches it, then releases the write lock. From this point on, the conversation id is available to the reference proxy.
3. The reference proxy then invokes the orderApples operation over some transport, flowing the id and the invocation parameters.
4. The reference proxy returns control to the application logic.
5. At some later time, the orderApples invocation arrives in the runtime hosting the target service.
6. If the target instance is not created or initialized, the runtime does so.
7. The runtime dispatches the orderApples invocation to the target instance.
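
Purely for illustration, here is a rough sketch of what such a simplistic reference proxy might look like. Every class and method name below (ConversationalProxy, Transport, and so on) is invented for the example and is not taken from any spec or runtime:

import java.util.UUID;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical names, for illustration only.
public class ConversationalProxy {

    private final ReadWriteLock lock = new ReentrantReadWriteLock();
    private final Transport transport;   // some transport abstraction
    private String conversationId;       // null until the conversation starts

    public ConversationalProxy(Transport transport) {
        this.transport = transport;
    }

    // Steps 1-4: the first invocation creates the id synchronously, then flows it
    public void orderApples(int quantity) {
        String id = getOrCreateConversationId();
        transport.send("orderApples", id, quantity);   // step 3
        // step 4: control returns to the application
    }

    public void orderPlums(int quantity) {
        String id = getOrCreateConversationId();       // read lock only
        transport.send("orderPlums", id, quantity);
    }

    // Step 2: write lock plus synchronous id generation (a UUID here)
    private String getOrCreateConversationId() {
        lock.readLock().lock();
        try {
            if (conversationId != null) {
                return conversationId;
            }
        } finally {
            lock.readLock().unlock();
        }
        lock.writeLock().lock();
        try {
            if (conversationId == null) {
                conversationId = UUID.randomUUID().toString();
            }
            return conversationId;
        } finally {
            lock.writeLock().unlock();
        }
    }

    // Minimal transport abstraction, again purely for the example
    public interface Transport {
        void send(String operation, String conversationId, Object... args);
    }
}

The important property is that getOrCreateConversationId() completes synchronously on the first call, so by the time control returns to the application the id exists and can be flowed with any subsequent invocation.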

Now let's assume the client invokes both orderApples and orderPlums in that order, and that both are non-blocking. Let's also assume ordered messaging is not used (e.g. the dispatch is in-VM using a thread pool) and for some reason orderPlums is delivered to the target runtime before orderApples. Steps 1-4 from above remain the same. Then:

5. The application logic invokes orderPlums.
6. The reference proxy acquires a read lock to read the conversation id which it obtains immediately. It then releases the lock and flows the invocation data and id over some transport. 
7. The orderPlums request arrives at the target runtime before the orderApples invocation, carrying the conversation id created in step 2.
8. The target instance does not exist, so the runtime instantiates and initializes it
9. The orderApples invocation, containing the operation parameters and the same conversation id, arrives at the target runtime. Since the target instance already exists, the runtime dispatches to it.

Note that the read/write lock would not be necessary for clients that are stateless components since they are thread-safe.

In the above cases, the creation of a conversation id is orthogonal to the creation of a provider instance, and the former always completes prior to an instance being created. If we were to replace the conversation id generation algorithm (the UUID) with something more complex (e.g. one that called out to the service provider runtime), the same sequence would hold.
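
To make the "more complex algorithm" point concrete, the proxy above could depend on something as small as the following instead of calling UUID directly (again, the names are invented for the example):

// Invented for illustration only.
public interface ConversationIdGenerator {
    String generateId();
}

// A simplistic default corresponding to step 2 above.
class UuidConversationIdGenerator implements ConversationIdGenerator {
    public String generateId() {
        return java.util.UUID.randomUUID().toString();
    }
}

A runtime that needed to call out to the service provider runtime to obtain the id would just supply a different implementation; the locking and the rest of the sequence would be unchanged.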

Also, the above sequence avoids the problem of using conversational services with transacted messaging that arises when a request-reply pattern is forced to be part of the forward service contract.
------------------------------ end of Jim's stuff


The thing that this logic avoids discussing is that for there to be a conversation, something has to get started on the server side - in my opinion, it is ESSENTIALLY a server-side concept.

My logic does cover this, particularly steps 7-9. However, server-side instantiation can be done lazily in most cases. I would expect many SCA implementations to do this or be required to do this depending on the transport used. In this case, think of the synchronous id generation as being a guarantee on the part of the SCA infrastructure that an instance will be created and that it will maintain consistency should multiple invocations arrive out of order. This is kind of like 2PC but a lot easier to implement.
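
As a sketch of the sort of thing I mean on the provider side (all names here are invented for the example and do not correspond to any particular runtime), the target runtime only needs to key instance creation off the conversation id carried by whichever message happens to arrive first:

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Hypothetical sketch of a conversation-scoped instance registry in the target runtime.
public class ConversationScopeContainer {

    private final ConcurrentMap<String, OrderServiceImpl> instances =
            new ConcurrentHashMap<String, OrderServiceImpl>();

    // Called for every inbound invocation, regardless of arrival order.
    public void dispatch(String conversationId, String operation, Object[] args) {
        OrderServiceImpl instance = getOrCreate(conversationId);
        invoke(instance, operation, args);
    }

    // Whichever message arrives first for a given conversation id creates and
    // initializes the instance; later messages for the same id reuse it.
    private OrderServiceImpl getOrCreate(String conversationId) {
        OrderServiceImpl instance = instances.get(conversationId);
        if (instance == null) {
            OrderServiceImpl created = new OrderServiceImpl();
            OrderServiceImpl existing = instances.putIfAbsent(conversationId, created);
            instance = (existing != null) ? existing : created;
        }
        return instance;
    }

    private void invoke(OrderServiceImpl instance, String operation, Object[] args) {
        // Reflective or generated dispatch would go here; elided for brevity.
    }

    // Stand-in for the application's conversation-scoped implementation class.
    static class OrderServiceImpl {
    }
}

Whether orderApples or orderPlums arrives first, the first message for a given conversation id creates and initializes the instance and later messages are dispatched to the same one, which is the consistency guarantee I am describing.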

From the application developer's perspective, it is even simpler: they are comfortably shielded from all of this, as shown in this code snippet:

public void doIt() {
   orderService.orderApples(12);
   orderService.orderPlums(12);
}
  


The idea is that in a conversation, when a second operation invocation happens on the server side, it does so in the context of some state that was established by a first operation.
Instances, etc, are not the main point - they are only one means to achieve this end.

I'll note that this is entirely consistent with my proposal.

It is important to understand that Op A followed by Op B may make some sense, while Op B followed by Op A may make no sense at all, in terms of a conversation.
eg:
Op A = "Start an order"
Op B = "Add to an order"  - called before Op A, the response is going to be "what Order?"

In principle, I think that an operation that STARTS a conversation is special, in that it must do something on the server before the conversation can be said to be "in progress".
At the very least it must set up whatever the state is that is the context for future operations.

The only obvious way to guarantee this is to ensure that the server-side operation that starts the conversation is complete before any further operations are done within the same conversation.

Note that my reasoning here does not depend on ANY mechanics - I think that it is true whatever the mechanics.

ONE WAY to achieve the required behaviour is to require that any operation which is to start a conversation is treated as a Synchronous operation - so that it is guaranteed to be complete before the next operation is invoked.  This approach is at least simple to understand at a programming level and I believe that it is straightforward for binding implementers to handle.  It is easy to warn a client programmer that they must call operation X first and must not call any other operations on the same reference proxy until it completes (e.g. if they are multi-threading).


The statement, "ONE WAY to achieve the required behaviour is to require that any operation which is to start a conversation is treated as a Synchronous operation" is exactly what I am proposing. However, my point is that operation should not by a *service* operation. Rather, it is something done by the SCA infrastructure. If that is not done, and SCA makes a requirement that all conversational services have a synchronous request/reply start operation, SCA will *not work* with transacted messaging. This means it is not straightforward to handle for many binding implementations based on message providers such as a JMS implementation, Oracle AQ, or Websphere MQ. That, IMO, is a show-stopper since transacted messaging is one of the key use cases for SCA and I would venture to say much more widely used than Web Services in enterprise computing. 
 
I'll also note that with my proposal, it is entirely possible for an application developer to write their service using synchronous request-reply, but it will not work with multiple invocations using transacted messaging. They would have to either use auto-acknowledge or commit the session for each invocation.
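
To make the transacted messaging point concrete, here is a rough sketch of what the client side looks like over raw JMS if the conversation has to be started with a request-reply operation. The queue names and the reply handling are invented for the example; the JMS API calls are standard:

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;

// Sketch only: illustrates why a request-reply "start conversation" operation
// cannot work inside a transacted JMS session. Queue names are invented.
public class TransactedStartExample {

    public void startOrder(ConnectionFactory factory, Queue requestQueue, Queue replyQueue) throws Exception {
        Connection connection = factory.createConnection();
        connection.start();
        // Transacted session: nothing sent here is delivered until commit()
        Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
        try {
            MessageProducer producer = session.createProducer(requestQueue);
            TextMessage request = session.createTextMessage("startNewOrder");
            request.setJMSReplyTo(replyQueue);
            producer.send(request);                 // buffered, not delivered

            MessageConsumer consumer = session.createConsumer(replyQueue);
            Message reply = consumer.receive(5000); // times out: the service never
                                                    // saw the request, so no reply
            // The conversation id (or CallableReference) the proposal expects as a
            // reply can never arrive before the commit below.
            session.commit();
        } finally {
            connection.close();
        }
    }
}

With a transacted session the request is simply buffered until commit(), so a blocking receive on the reply queue before the commit can never see a reply; the client either hangs (with no timeout) or times out as in the sketch.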
 
 
- other approaches might be possible, but seem more complex to me.  For example, require strict sequencing of the forward messages ("in order", "once and only once"), allied to a requirement that messages after the first are processed only once processing of the first is complete.


Service contract operation sequencing is a separate concern from allocation of a conversation id or instance creation. I have a simple solution for enabling basic service contract operation sequencing that I will cover in my response to Simon's email from this morning. I will try to send something today or tomorrow.

<snip/>

Expanding on point 2, requiring a conversation to be initiated by a synchronous operation *to the service* cannot work over JMS when the client is using transacted messaging, since messages are not sent until after the transaction has committed. This is a very common messaging scenario. Assuming the callback is handled via a reply-to queue that the client listens on, the consumer only receives enqueued messages when a transaction commits, which prevents the client from receiving a response. If, in the original example above, the client is participating in a global transaction and OrderService.startNewOrder() returns a CallableReference or a proxy, the client will hang, as the forward message will not be received until the transaction commits (which won't occur since the client would be blocked listening on the reply-to queue).

To avoid this, I imagine most JMS binding implementations would use the mechanisms described in the JMS binding spec and pass a conversation id in the message header.

Therefore, I believe your proposal won't work in these scenarios.

<scn3>I understand your point about JMS.  However, you haven't addressed my other point about the conversational provider instance needing to be created and initialized before further invocations are made on it.  For transports that provide reliable queued in-order delivery of messages (the JMS case), the transport can take care of this.  For other transports, the first invocation must execute and complete before the second one can occur.  This serialization needs to be handled somehow, either by the application or by the infrastructure.</scn3>

<snip/>

2. Clarification on service operation signatures

I'm unclear if by the following the proposal intends to require use of CallableReference for conversational interactions:


A simple extension to the model already proposed can solve both these problems.  A conversation would be initiated by the service creating a CallableReference and returning it to the client.  This CallableReference contains an identity for the conversation.  This client then makes multiple calls through this CallableReference instance.  Because these calls all carry the same identity, a conversation-scoped service will dispatch all of them to the same instance.



It may have been hard to follow, but this discussion began with the assertion that a synchronous request-reply operation that returned a CallableReference was problematic for a number of reasons. Three of the most important are:

1. It will not work with transacted messaging
2. It ties service provider implementations and clients to SCA and exposes implementation details in the service design
3. It will make interop extremely difficult 

I've snipped the above message to include the discussion on point 1. Points 2-3 are covered in earlier emails by myself and Meeraj.

Jim



