
Subject: Re: [sca-j] Another early morning brainstorm - conversations revisited

See <scn5>....</scn5>.


Simon C. Nash, IBM Distinguished Engineer
Member of the IBM Academy of Technology
Tel. +44-1962-815156  Fax +44-1962-818999

From: Jim Marino <jim.marino@gmail.com>
Date: 09/08/2008 12:52
To: OASIS Java <sca-j@lists.oasis-open.org>
Subject: Re: [sca-j] Another early morning brainstorm - conversations revisited

Hi Simon,

Comments inline again...

On Aug 8, 2008, at 3:21 AM, Simon Nash wrote:


This is a good discussion that is bringing out many important points.  My latest responses are in <scn4>....</scn4>.

I agree! :-) I think we are covering good ground and making progress.


<scn3>I understand your point about JMS.  However, you haven't addressed my other point about the conversational provider instance needing to be created and initialized before further invocations are made on it.  For transports that provide reliable queued in-order delivery of messages (the JMS case) the transport can take care of this.  For other transports, the first invocation must execute and complete before the second one can occur.  This serialization needs to be handled somehow, either by the application or by the infrastructure.</scn3>

Sorry, I didn't explain that well. Let me try with an example. The current programming model will work fine in the case you outline above, that is, when an invocation arrives out of order, as long as the creation of the conversation id is a synchronous event from the perspective of the client reference proxy. Let's start by taking a simplistic SCA implementation and look at the sequence of events that would happen:

1. Application code invokes a reference proxy that represents a conversational service by calling the orderApples operation.
2. The reference proxy knows a conversation has not been started, so it acquires a write lock for the conversation id and *synchronously* performs some work to get a conversation id. Since this is a simplistic implementation, it generates a UUID and caches it, then releases the write lock. From this point on, the conversation id is available to the reference proxy.
3. The reference proxy then invokes the orderApples operation over some transport, flowing the id and invocation parameters.
4. The reference proxy returns control to the application logic.
5. At some later time, the orderApples invocation arrives in the runtime hosting the target service.
6. If the target instance has not been created or initialized, the runtime does so.
7. The runtime dispatches the orderApples invocation to the target instance.

Now let's assume the client invokes orderApples and then orderPlums, both of which are non-blocking. Let's also assume ordered messaging is not used (e.g. the dispatch is in-VM using a thread pool) and for some reason orderPlums is delivered to the target runtime before orderApples. Steps 1-4 from above remain the same. Then:

5. The application logic invokes orderPlums.
6. The reference proxy acquires a read lock to read the conversation id which it obtains immediately. It then releases the lock and flows the invocation data and id over some transport.
7. The orderPlums request arrives before the orderApples invocation, along with the conversation id created in step 2.
8. The target instance does not exist, so the runtime instantiates and initializes it.
9. The orderApples invocation, containing the operation parameters and the same conversation id, arrives at the target runtime. At this point the target instance has already been created, so the runtime dispatches to it.

Note that the read/write lock would not be necessary for clients that are stateless components since they are thread-safe.
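The proxy-side locking in steps 1-4, and the read-lock fast path in the second scenario, can be sketched in plain Java. This is an illustrative stand-in, not SCA API: the `ConversationProxy` class and its `invoke` method are hypothetical names, the UUID stands in for whatever id scheme a domain chooses, and the wire send is stubbed as a string.

```java
import java.util.UUID;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Illustrative sketch: the proxy creates the conversation id synchronously
// (under a write lock) before the first invocation is sent, so subsequent
// invocations only need a read lock to pick up the cached id.
public class ConversationProxy {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    private String conversationId; // guarded by lock

    /** Returns the cached id, creating it synchronously on first use. */
    public String conversationId() {
        lock.readLock().lock();
        try {
            if (conversationId != null) {
                return conversationId;
            }
        } finally {
            lock.readLock().unlock();
        }
        // Read locks cannot be upgraded, so release first, then re-check
        // under the write lock in case another thread won the race.
        lock.writeLock().lock();
        try {
            if (conversationId == null) {
                conversationId = UUID.randomUUID().toString();
            }
            return conversationId;
        } finally {
            lock.writeLock().unlock();
        }
    }

    /** Step 3: flow the id and parameters over some transport (stubbed). */
    public String invoke(String operation) {
        String id = conversationId(); // always resolved before dispatch
        return operation + "@" + id;  // stand-in for the wire message
    }
}
```

Whatever the id scheme, the property the sequences above rely on is that step 2 completes synchronously before step 3, so orderPlums in the second scenario reads an id that already exists.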

In the above cases, the creation of a conversation id is orthogonal to the creation of a provider instance and the former always completes prior to an instance being created. If we were to replace the conversation generation algorithm (UUID) with something that was more complex (e.g. called out to the service provider runtime) the same sequence would hold.

Also, the above sequence avoids the problem with using conversational services over transacted messaging that arises when a request-reply pattern is forced to be part of the forward service contract.

<scn4>Now it's my turn to apologize for not explaining well enough.  The sequence above covers the case where either of the two forward calls can validly execute first on the same conversational instance.  The case I am concerned about is where the first call must execute before the second call because it does some necessary prerequisite initialization of the conversational instance.  For example, the first call is "create order" and the subsequent calls add items (apples, plums) to the order that was previously created.  I believe this will be the common case, and in this case it is necessary for the first call in the conversation to be a two-way call.</scn4>

In your example, application initialization logic ("create order") should be performed in an @Init method on the service provider and not be left to the client. Generally, this is because initialization is associated with component lifecycle, which is part of the contract an implementation has with the SCA runtime, not a client.
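A minimal sketch of this lifecycle contract, under stated assumptions: the `Init` annotation below is a local stand-in for the real SCA `@Init` annotation so the example is self-contained, and `ToyRuntime` is a hypothetical runtime. It shows the key property being argued for: initialization completes before any dispatch, as a contract with the runtime rather than with a client.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

// Local stand-in for org.osoa.sca.annotations.Init; a real component
// would use the SCA annotation instead.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Init {}

// Conversation-scoped provider: "create order" is lifecycle logic owned
// by the runtime contract, not left to the first client call.
class OrderProvider {
    boolean orderCreated;

    @Init
    public void init() {
        orderCreated = true; // order setup happens here, not in a client call
    }

    public String orderApples(int n) {
        if (!orderCreated) throw new IllegalStateException("not initialized");
        return n + " apples added";
    }
}

// Toy runtime: guarantees any @Init method runs before the first dispatch.
class ToyRuntime {
    static <T> T instantiate(Class<T> type) throws Exception {
        T instance = type.getDeclaredConstructor().newInstance();
        for (Method m : type.getMethods()) {
            if (m.isAnnotationPresent(Init.class)) {
                m.invoke(instance); // lifecycle callback before any operation
            }
        }
        return instance;
    }
}
```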

<scn5>If the initialization needs any information from the client, it would need to happen as part of a client business method call.</scn5>

There may, however, be a requirement for a particular service operation to be invoked before others. This may be to supply data needed to initialize a business process. This is very different from initializing a component instance and should be modeled as part of the service contract, since it is a requirement placed on clients, not the SCA runtime.
If a business process needs to be initialized, it should be handled through the following steps:

1. Telling developers to write code that always calls a particular operation first. This can be done in a variety of ways: through documentation, through some type of metadata on the service operation that the runtime can use to enforce sequencing, verbally, or (less considerately) by just having the service provider throw an error at runtime.

2. Having the first operation be synchronous and throwing a fault if other operations are invoked prior to it. Or, if the operations are asynchronous, using a transport that guarantees ordered delivery.

If a business process needs to be initialized, the service implementation still must guard against a client inadvertently calling operations out of sequence, even if the SCA runtime provides some sort of mechanism on the client side to enforce sequencing, such as proprietary operation metadata. This is because a service may be invoked from a non-SCA client if it is bound. If, however, it is instance-related initialization, the service implementation need not take any precaution, as @Init will always ensure initialization happens before any other operation.
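The service-side guard can be sketched as follows; the class and operation names (`OrderProcess`, `startOrder`) are hypothetical. The point is that the check lives in the implementation itself, so it holds even for non-SCA clients reaching the service through a binding.

```java
// Sketch of a service implementation guarding against out-of-sequence
// calls: the contract requires startOrder to run first, and any later
// operation faults if the process was never started.
public class OrderProcess {
    private boolean started;

    /** The operation the service contract requires clients to call first. */
    public synchronized void startOrder(String customer) {
        started = true;
    }

    /** Faults if invoked before the required first operation. */
    public synchronized String orderApples(int n) {
        if (!started) {
            throw new IllegalStateException("startOrder must be invoked first");
        }
        return n + " apples ordered";
    }
}
```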

<scn5>I agree that all these approaches are possible.  My main point was that this is a common pattern, and we seem to be agreeing on that.</scn5>

2. Clarification on service operation signatures

I'm unclear if by the following the proposal intends to require use of CallableReference for conversational interactions:

A simple extension to the model already proposed can solve both these problems.  A conversation would be initiated by the service creating a CallableReference and returning it to the client.  This CallableReference contains an identity for the conversation.  This client then makes multiple calls through this CallableReference instance.  Because these calls all carry the same identity, a conversation-scoped service will dispatch all of them to the same instance.

I'm assuming this is just for illustrative purposes and it would be possible for a conversation to be initiated in response to the following client code, which does not use the CallableReference API:

public class OrderClient ... {

    protected OrderService service;

    public void doIt() {
        service.orderPlums(...); // routed to the same target instance
    }
}

Is this correct?

<scn>In a word, No.  All conversations would need to be initiated by the proposed mechanism of having the server return a CallableReference to the client.  This allows the conversation identity to be generated by the server, not the client.  Several people (e.g., Anish and Mike) have called this out as an issue with the current mechanism for conversations.</scn>

Sorry for being so thick, but I don't see why the above could not be supported using "server" generation of conversation ids. We should be careful here to specify what we mean by "server", and whether invocations are flowing through a wire or a client external to the domain. I don't think the term "server" should necessarily mean "the runtime hosting the service provider." Sometimes this may be the (degenerate) case, but not always.

For communications flowing through wires, the only thing we can likely say is "conversation ids are generated by the domain".  I would imagine most domain implementations would choose an efficient id generation scheme that can be done without context switching on every invocation (e.g. UUID generation provided by the JDK). However, in cases where this efficient identity generation is not possible, I believe SCA infrastructure can support the above code. In this case, the reference proxies would be responsible for making some type of out-of-band request to generate a conversation id to some piece of domain "infrastructure".

<scn2>I'm very uncomfortable with placing this kind of runtime requirement on the SCA domain, which IMO should not take on the responsibilities of a persistence container.  For efficient support of persistent conversational instances with failover, load balancing and transactionality, the ID may need to be generated by a persistence container.  For example, it could be a database key.</scn2>

I think this is less of a requirement than what you are proposing, which also shifts the burden to application code (which should not have to deal with these mundane infrastructure concerns).

The only requirement I am making is that the "domain" provides the key. My usage of the term "domain" is intentionally vague: it could be a database, some service hosted in a cluster, or a snippet of code embedded in a Java proxy.  Generating the id can therefore be done using a database key or, more simply, by having a reference proxy use facilities already provided in JDK 1.5 or greater, which would require one line of code. My proposal would not restrict the SCA infrastructure in how the id is generated, other than that it be done synchronously and out-of-band.

<scn3>I'm concerned about putting too much mechanism into the SCA domain.  I think it needs to support SCA wiring, deployment and configuration.  I'd expect other middleware that's not part of the SCA domain to provide things like persistence, load balancing and failover.</scn3>

We may have a conceptual difference. I consider domain infrastructure to include any middleware resources used by the SCA implementation. This may include multiple runtimes, databases, messaging providers, JEE app servers etc. An SCA implementation would not necessarily implement persistence, ordered messaging, transaction recovery, etc. Rather, it could use other software to provide those features. For example, an SCA implementation that supports key generation using a database table may only contain a DDL script and code that makes a JDBC call. Given that, I don't think I'm putting anything into the domain.

<scn4>Yes, we seem to have a conceptual difference on what constitutes the SCA domain.  I see the system middleware as providing an SCA domain as well as these other capabilities.  IMO, SCA should not attempt to provide a complete appserver infrastructure (become the new JEE).  Over time we may grow the SCA envelope as systems move towards a more service-oriented internal structure, but I would like to take this step by step and not attempt to take on too much in the first round of OASIS specs.</scn4>

I'm not arguing that SCA should replace persistence, messaging, transactional managers, or app servers. In fact, I expect an SCA implementation would "outsource" to (i.e. integrate with) many of those technologies to offer what we commonly refer to as enterprise features. When an SCA implementation does so, it is using those technologies as "resources" and hence I consider them as being part of the domain infrastructure. For example, a component implemented as an EJB may be deployed to an EJB container integrated with an SCA runtime. I would refer to the EJB container as being part of the domain infrastructure. Or, a binding may use a JMS messaging provider to send service invocations to a service. Again, I would consider that provider to be part of the domain infrastructure.

<scn5>I'm OK with these examples, where the other componentry is performing specific tasks that are a fundamental part of the SCA model (i.e., supporting specific implementation types and bindings).  I'm not so keen to see this expanded to drag in further dependencies that are needed by things whose relationship to SCA fundamentals is not so clear.</scn5>


<scn4>I haven't been able to explain my concern well enough.  Let me lay it out using sequences of steps.  Sequence A is what the client application expects will happen; Sequence B is what may actually happen in some cases.

Sequence A:
1. The client business logic calls a proxy to start a conversation.
2. The proxy interacts with some system facility to obtain a conversation ID.
3. The proxy invokes the service.
4. The proxy returns to the client.
5. The client business logic obtains the conversation ID from the proxy.
6. The client business logic creates a correlation entry keyed by the conversation ID, for use by callbacks.
7. The service business logic executes and makes a callback to the client.
8. The client callback business logic uses the conversation ID passed by the server to lookup the correct information in the correlation table.

Sequence B:
1. The client business logic calls a proxy to start a conversation.
2. The proxy interacts with some system facility to obtain a conversation ID.
3. The proxy invokes the service.
7. The service business logic executes and makes a callback to the client.
8. The client callback business logic uses the conversation ID passed by the server to lookup the correct information in the correlation table.
4. The proxy returns to the client.
5. The client business logic obtains the conversation ID from the proxy.
6. The client business logic creates a correlation entry keyed by the conversation ID, for use by callbacks.

If steps 7 and 8 happen before steps 4, 5 and 6, the client logic is broken.  The only way to guard against this is to move steps 5 and 6 much earlier, and change what step 2 does, as follows:

Sequence C:
5. The client business logic obtains a conversation ID from the system infrastructure.
6. The client business logic creates a correlation entry keyed by the conversation ID, for use by callbacks.
2. The client business logic associates this conversation ID with a proxy for the service.
1. The client business logic calls a proxy to start a conversation.
3. The proxy invokes the service.
7. The service business logic executes and makes a callback to the client.
8. The client callback business logic uses the conversation ID passed by the server to lookup the correct information in the correlation table.
4. The proxy returns to the client.

Steps 5 and 2 in sequence C require either new APIs or new semantics for existing APIs.  This negates the apparent simplicity advantage of your proposed approach.</scn4>

Thanks for laying this out. Sorry in advance if I've misunderstood what you are getting at so please correct me if I'm missing something obvious.

Unfortunately, I don't think your approach stops Sequence B from happening unless you are imposing the arbitrary requirement that callbacks cannot be made until a sequence of events happen on the client (i.e. it obtains a conversation id and stores it somewhere). If a service provider makes a callback during an initial synchronous call, and if the callback is non-blocking, it may arrive before the forward call returns. If the callback is synchronous, it is guaranteed to arrive before the forward call returns. This means Sequence B may happen in your model as well.

<scn5>By "my approach", do you mean the proposal that started this thread?  This is equivalent to sequence C, with step 5 representing the initial call and step 6 representing logic that the client would perform after this call.  The initial call that sets up the conversation would not make any callbacks, as you say.  My point here is that sequence C is required, and we need to find some way to guarantee it.</scn5>

The issue here is not the two programming models but rather bad programming. Both the client and service provider implementations have failed to properly design their service contracts and account for callbacks (and potentially asynchrony as well). One way to guard against this scenario is to deal with it at the application level. This is what you effectively show in Sequence C. There are also two other, simpler ways to handle this problem: one at the application level, the other through facilities entirely provided by the runtime (and not left to application code).
Sequence B can be avoided in my model at the application level by having the service semantics (the contract between the client and provider) prohibit callbacks until a particular forward service operation has been invoked.

<scn5>This is complex to enforce, because SCA would need to override the normal transport-level dispatching semantics.  It also does not work, because of the need for steps 5 and 6 to occur before step 8.  The semantics you are proposing would ensure that step 8 happens after step 4, but they would not ensure that step 8 happens after steps 5 and 6.  There is nothing that infrastructure can reasonably do to ensure that step 8 happens after steps 5 and 6, because it does not know when step 6 has completed.</scn5>

This does not necessarily have to be the first operation. In addition, callback methods would need to check against a callback being issued incorrectly (i.e. before the particular forward call), reporting an exception or taking some other action.

For example, a client may wish to store a conversation id in a database and issue a forward "proceed" invocation, as in:

// get the conversation id; this will vary based on Simon's or my approach, so it is commented out
// store the conversation id in a database

If the service provider does not issue a callback until the proceed operation happens, the code will work correctly. If a callback is issued before proceed is called, an exception should be issued as it is a programming error. Performing the check to ensure a callback is not performed prematurely is a trivial operation. This would involve performing a null check when correlating the conversation id, which should be done anyway.

<scn5>You are now effectively describing the approach I proposed at the start of this thread, with an additional synchronous forward call.</scn5>

My approach can also be used in conjunction with policy to have the runtime guarantee Sequence B does not happen. If the above uses transacted messaging, where a transaction is begun prior to startConversation() and committed after proceed(), there is no possibility that Sequence B can happen. This is because the messages will not be received by the provider until the transaction commits, which happens after the conversation id has been persisted (either to a database or some in-memory holder). I'll also note that in order for the conversation id to be obtained when using a messaging provider, there must be a mechanism that is not tied to message delivery, and hence a separate API invocation. Fortunately, this can be as easy as doing the following after the "startConversation()" invocation above:


Of course this is not to say my approach requires a transactional messaging provider. Recalling the previous option, suitable measures can be taken at the application level to prevent Sequence B from happening in a very simple runtime.

<scn5>The use of transacted messaging requires a series of one-way calls with no callbacks until after the transaction has committed.  This is very different from the scenario we have been discussing, with two-way operations that may invoke callbacks.</scn5>

Given this, I still believe my "simplicity advantage" holds while avoiding the perils of Sequence B, either by taking appropriate measures at the application level or through runtime enforcement via policy (on runtimes that can take advantage of transacted messaging).  Moreover, I would argue the key issue we need to consider with the two approaches is whether each supports the range of requirements typical applications are likely to place on an SCA runtime. In this context, the approach I am advocating works with transacted messaging, whereas requiring a synchronous request-response forward operation to commence a conversation does not. In my experience, loosely coupled, transacted messaging is a cornerstone of enterprise applications and we should not be codifying an API in SCA that prohibits its use.

<scn5>The transacted messaging case should be considered, but I don't think it should drive the whole discussion.  As I said above, it makes some very specific assumptions about the interaction between service and client.  I don't think this is a very common case amongst all the situations where SCA conversations and callbacks could be used.  We might be able to fine tune my proposal to provide a solution for this aspect, while leaving the other parts of the approach in place.</scn5>


