On Aug 13, 2008, at 8:07 AM, Mike Edwards wrote:
I've switched to internet style replies to try and make it easier for folk to see the comments I'm making....
also using red
Strategist - Emerging Technologies, SCA & SDO.
Co Chair OASIS SCA Assembly TC.
IBM Hursley Park, Mail Point 146, Winchester, SO21 2JN, Great Britain.
Phone & FAX: +44-1962-818014 Mobile: +44-7802-467431
Jim Marino <firstname.lastname@example.org> wrote on 08/08/2008 17:10:49:
> Re: [sca-j] Another early morning brainstorm - conversations revisited
> On Aug 8, 2008, at 8:34 AM, Mike Edwards wrote:
> I don't think we've yet got a meeting of minds on the problems here.
> Perhaps we are talking past one another. I'll address each of
> your points below. It may be the case that we need to discuss this on the phone.
> "My logic does cover this, particularly steps 7-9. However, server-
> side instantiation can be done lazily in most cases. "
> This misses my major point.
> For a conversation, it isn't the creation of an instance (etc) that
> matters. It is the creation of the server-side context, which takes
> place DURING the processing of the forward operation that
> establishes the conversation. The vital thing is that this *MUST*
> occur before any further forward operations are handled. And my
> point is that in the general case, the order of forward operations is *VITAL*.
> Then you agree with me. I believe my logic outlined previously
> handles this case. I may be wrong though, so perhaps you can point
> out issues in what I have outlined?
I don't see anything in your proposal which guarantees that the "first" operation - i.e. the one
that starts the conversation - actually is the first one to reach the server. But it had better be,
otherwise there is no server state to which any subsequent operations can relate.
I think there are a number of distinct issues here...
First, I'm not sure I follow what you mean by "server state". Is it "conversation state"? As I mention below, conversation state does not need to be materialized in memory while a conversation is active as long as consistency is maintained. There would be consistency if an implementation instance were created after the id generation but before the first *arriving* invocation was dispatched to the instance. I picture conversation start as allocating a conversation but not requiring materialization of in-memory state.
The second issue is operation invocation ordering. I would argue this is a requirement imposed by the application, not the runtime. As there is no need to have an instance materialized in memory on conversation start, and a runtime should allocate provider instances in a thread-safe manner when dispatching the first operation, there is no need for a runtime to impose guaranteed ordering.
If the first invocation must be guaranteed to arrive at the server first, then this is an application requirement. This may be because the service requires specific data first or because it is coded in a certain way. When this is the case, the application should either use ordered messaging or make the operation synchronous *and* ensure a client does not call operations out-of-order.
Given this, I'm not sure why the *runtime* would require ordered delivery in the cases we are discussing. One case where ordered delivery might be required is to support clustered conversations. However, we're not talking about that case here.
Having said all of this, if an SCA implementation wanted to create an instance when the conversation is started, it could certainly do so using my model. This would be done as part of the synchronous id generation phase.
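To make the model concrete, here is a minimal sketch (the class and method names are hypothetical, not from any spec or implementation) of conversation start as pure id generation, with the provider instance materialized lazily and thread-safely on the first arriving invocation:

```java
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Hypothetical sketch: "starting" a conversation only allocates an id.
// The provider instance is materialized lazily, on the first dispatched
// operation, so an active conversation need not hold any in-memory state.
public class ConversationManager {

    // Live instances keyed by conversation id; the absence of an entry
    // does not mean the conversation is inactive.
    private final ConcurrentHashMap<String, Object> instances =
            new ConcurrentHashMap<>();

    // Synchronous id-generation phase: no instance is created here.
    public String startConversation() {
        return UUID.randomUUID().toString();
    }

    // The first arriving invocation materializes the instance atomically;
    // computeIfAbsent guarantees a single instance per conversation even
    // under concurrent dispatch, with no ordered delivery required.
    public Object dispatch(String conversationId, Supplier<Object> factory) {
        return instances.computeIfAbsent(conversationId, id -> factory.get());
    }
}
```

An implementation that preferred eager instantiation could equally invoke the factory inside `startConversation` as part of the synchronous phase.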
> So - how do we construct a model that deals with this?
> "1. It will not work with transacted messaging"
> OK, so let's examine the case of transacted messaging.
> My first observation is that in the case of transacted messaging,
> ALL the forward operations involved are going to have to be one-way.
> The second observation is that since the forward messages don't get
> sent until the transaction commits, if there are multiple forward
> operations, then the second and subsequent calls can't depend on any
> data being returned from the earlier operations (eg via callbacks).
> That isn't to say that there can't be any callbacks - just that they
> cannot be received until the transaction commits.
> Yes! And it means we can't rely on a CallableReference (or other
> message) sent to a reply-to queue.
Agreed, but it is more brutal than that. In this transacted case, the client can't actually depend
on anything from the server when making its forward requests. So all that you have is a series
of forward one-way operations where the input to each operation is actually independent
of the other operations.
We need to be careful here with terminology - they are not independent of each other as they succeed or fail together.
Frankly, if this were the case, I suggest a redesign of the service interface to have a single
"coarse" operation that contains all of the data that would otherwise have gone into the sequence
of one-way operations. There is no benefit whatsoever to having separate forward operations.
This isn't possible in many cases. For example, a conversational client may be invoked several times within a transaction, with each invocation resulting in a forward call to the conversational provider. If request/response were used over transacted messaging in those scenarios, the application would hang.
> I still believe that in the general case, the first "conversation
> starting" operation must complete before any of the following
> operations can proceed. Anything else leads to chaos.
> So, I think that this would place the following requirements on the
> server side of things (possibly shared between infrastructure and
> service implementation):
> - the "conversation start" operation must be identified and dealt
> with specially
> - the conversation start operation must complete before any other
> operations proceed
> Yes! But I note that the "conversation start" call is not a service
> operation. It is performed by the SCA implementation and is never
> visible to application code. Also, it never results in an invocation
> of a service operation.
This is where I really part company with you. For you, it seems that a conversation is something that is
independent of the state of the service implementation. For me, the conversation is ALL ABOUT the
state data in the service implementation. In my way of thinking you can't create a conversation unless
there is creation of the state data inside the service implementation. So to envisage "conversation
start" as being something that the infrastructure code can do on its own makes no sense whatsoever.
I think you are conflating the existence of an implementation instance and materialized state in memory with conversation start. While a conversation is "active" there may not be any state or associated instance held in memory. For long-running conversations, I expect this to frequently be the case: a runtime may passivate an instance or use other means to remove state from memory.
The "start conversation" sequence where an id is generated in my model is equivalent to this: there is a conversation but there does not need to be any instance (or state) in memory. This does mean that any initialization method (i.e. marked with @Init) will not be called specifically when this happens, but it can be guaranteed to happen before the first invocation, which preserves initialization semantics for conversation-scoped instances.
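A hedged sketch of how a runtime might preserve those initialization semantics under lazy materialization (the container class and the `@Init` handling here are illustrative assumptions, not spec-defined API):

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: even with lazy materialization, any @Init method
// runs when the instance is created, i.e. before the first operation is
// dispatched to it, preserving initialization semantics for
// conversation-scoped implementations.
public class LazyScopeContainer {

    @Retention(RetentionPolicy.RUNTIME)
    public @interface Init {}

    private final ConcurrentHashMap<String, Object> instances =
            new ConcurrentHashMap<>();

    public Object getInstance(String conversationId, Class<?> implClass) {
        return instances.computeIfAbsent(conversationId, id -> {
            try {
                Object instance = implClass.getDeclaredConstructor().newInstance();
                for (Method m : implClass.getMethods()) {
                    if (m.isAnnotationPresent(Init.class)) {
                        m.invoke(instance); // runs before any operation is seen
                    }
                }
                return instance;
            } catch (ReflectiveOperationException e) {
                throw new IllegalStateException("initialization failed", e);
            }
        });
    }
}
```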
I think that this is one of the key issues - possibly THE key issue - separating us.
Yep, one of them.
> ONE WAY to achieve this would be to require:
> - forward messages to be handled in strict order (Yes, only the
> first one needs this, but we don't have a way to only request one
> message to be in-order)
> - only one operation at a time can be processed on the server for a
> given conversation (again, this strictly only need apply to the
> first operation in the conversation)
> (I think that this can only be accomplished if the infrastructure
> code does some special work - for subsequent operations an instance
> could use locking, but it is
> very hard to do this for the initial operation since there is a time
> window between instance creation and getting the lock during
> processing of the 1st operation).
> I don't believe so. Again, I may be mistaken but I believe the logic
> I outlined handles locking and does not require ordered messaging.
> Steps 7-9 of the second scenario (plum orders arriving before apple
> orders) demonstrates how this can be handled.
You only lock to create an ID. How does this help the service implementation?
It is not clear to me from your example.
If locking is done to guarantee receipt of the conversation id and address of the service provider on the client, then the only thing the service provider runtime must do is guarantee an implementation instance is allocated in a thread-safe manner.
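For illustration, the client-side locking I have in mind could be as simple as the following (hypothetical names; a real reference proxy would also carry the provider's address):

```java
import java.util.UUID;

// Hypothetical sketch: the reference proxy generates the conversation id
// at most once, under a lock, so every forward operation carries the same
// id no matter which client thread happens to invoke first.
public class ReferenceProxy {

    private final Object lock = new Object();
    private String conversationId;

    // Called at the start of every forward invocation.
    public String getOrCreateConversationId() {
        synchronized (lock) {
            if (conversationId == null) {
                conversationId = UUID.randomUUID().toString();
            }
            return conversationId;
        }
    }
}
```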
> " 2. It ties service provider implementations and clients to SCA and
> exposes implementation details in the service design
> 3. It will make interop extremely difficult "
> I don't understand either of these comments - you will need to
> explain them in more detail.
> Both I and Meeraj sent emails earlier in the thread about this. It
> would be helpful if you could go back and respond to those in line
> rather than repeat what we wrote here.
Can you reply with the URLs of the emails you are referring to? The chain has become too long for me
to work out which email(s) you mean.
I don't have the URLs handy, but I would suggest searching for the thread subject in your email client. I sent a detailed note on the 29th and Meeraj posted a message on July 30th. There are other comments in the thread that are probably also worth reviewing.
> I think that the requirements that you are laying down above are
> putting an extreme amount of expectation on the infrastructure, if
> the client and the service implementations are to be "unaware" of
> all that is going on. There are also piles of unexpressed semantics
> involved too, which would have to be part of the design of both
> client and of the provider.
> I'm not sure about that. I believe most of this can be done in a
> fairly lightweight fashion. For things like ordered delivery (which
> would be one way of handling conversationality in a cluster but
> certainly not required to implement what we are discussing here), we
> fortunately have things like shared disks and Websphere MQ that do
> most of the heavy lifting :-)
> Finally, these requirements are the things that make interop
> difficult - where in the specs for interoperable protocols are the
> place(s) for holding info like the conversation ID which also imply
> the semantics I've laid out above?
> Interop is going to be difficult enough here without complicating
> things by introducing an SCA type in service operation signatures.
> For example, how would a non-SCA JMS, JAX-WS or .NET client respond
> to a CallableReference? I don't think it can be mapped to some type
> of endpoint address as it contains behavior (i.e. the ability to
> invoke on it).
I agree regarding the idea of sending a CallableReference; I'm not in favour of that.
But what about the conversation ID that you still need to send?
That would need to be represented in suitable header information for the transport. I am being purposely vague here: I believe the conversation id should be decoupled from the transport, while how it is represented will be transport- and implementation-specific.
In the case outlined above, the client would need to perform the functions of a reference proxy, which assumes the conversation id is consumable in the client's implementation language. I imagine doing this is not trivial but something similar has been done with the JAX-WS RI. It includes a "session" example in the distribution if you are curious.
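As a transport-neutral illustration of the header idea (the header name below is an assumption; a real binding might carry it as, say, a JMS string property or a SOAP header instead):

```java
import java.util.Map;

// Hypothetical sketch: the conversation id travels as an opaque header.
// How the header is actually encoded on the wire is transport- and
// implementation-specific, as discussed above.
public class ConversationHeader {

    static final String NAME = "scaConversationId"; // assumed header name

    public static void attach(Map<String, String> headers, String conversationId) {
        headers.put(NAME, conversationId);
    }

    public static String extract(Map<String, String> headers) {
        return headers.get(NAME); // null when no conversation is in progress
    }
}
```

A non-SCA client would only need to echo this opaque value back on subsequent requests, which is far simpler than consuming a CallableReference.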