Subject: Re: [sca-j] Another early morning brainstorm - conversations revisited
- From: Mike Edwards <mike_edwards@uk.ibm.com>
- To: OASIS Java <sca-j@lists.oasis-open.org>
- Date: Fri, 8 Aug 2008 13:21:33 +0100
Folks,
I'm going to reply to Jim's last note
by copying out the relevant pieces, so that it is easier to follow what
is being debated....
---------------------------- Jim's stuff
Sorry, I didn't explain well. Let me try with
an example. The current programming model will work fine in the case you
outline above, that is, when an invocation arrives out of order as long
as the creation of the conversation id is a synchronous event from the
perspective of the client reference proxy. Let's start by taking a simplistic
SCA implementation and look at the sequence of events that would happen:
1. Application code invokes a reference proxy that represents
a conversational service by calling the orderApples operation
2. The reference proxy knows a conversation has not been
started so acquires a write lock for the conversation id and *synchronously*
performs some work to get a conversation id. Since this is a simplistic
implementation, it generates a UUID and caches it, then releases the write
lock. From this point on, the conversation id is available to the reference
proxy.
3. The reference proxy then invokes the orderApples operation
over some transport, flowing the id and invocation parameters
4. The reference proxy returns control to the application
logic
5. At some later time, the orderApples invocation arrives
in the runtime hosting the target service
6. If the target instance is not created or initialized,
the runtime does so
7. The runtime dispatches the orderApples invocation to
the target instance.
Now let's assume the client invokes both orderApples and
orderPlums in that order, which are non-blocking. Let's also assume ordered
messaging is not used (e.g. the dispatch is in-VM using a thread pool) and
for some reason orderPlums is delivered to the target runtime before orderApples.
Steps 1-4 from above remain the same. Then:
5. The application logic invokes orderPlums.
6. The reference proxy acquires a read lock to read
the conversation id which it obtains immediately. It then releases the
lock and flows the invocation data and id over some transport.
7. The orderPlums request arrives, along with the conversation
id created in step 2 before the orderApples invocation.
8. The target instance does not exist, so the runtime
instantiates and initializes it
9. The orderApples invocation containing the operation
parameters and the same conversation id arrives on the target runtime.
At this point the target instance is already created, so the runtime dispatches
to it.
Note that the read/write lock would not be necessary for
clients that are stateless components since they are thread-safe.
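The lazy, synchronous id generation in steps 1-4, together with the read/write lock described above, can be sketched roughly as follows. This is a minimal illustration, not any SCA runtime's actual code; the class and method names are made up for the example, and the UUID scheme is the "simplistic implementation" from step 2.

```java
import java.util.UUID;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical sketch of a reference proxy's conversation-id handling.
// First call: write lock + synchronous id generation before control
// returns to the client. Later calls: read lock only.
public class ConversationIdHolder {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    private String conversationId; // null until the first invocation

    public String getOrCreateId() {
        lock.readLock().lock();
        try {
            if (conversationId != null) {
                return conversationId; // steps 5-6: id read under the read lock
            }
        } finally {
            lock.readLock().unlock();
        }
        lock.writeLock().lock();
        try {
            if (conversationId == null) { // re-check under the write lock
                // Step 2: synchronously generate and cache the id
                conversationId = UUID.randomUUID().toString();
            }
            return conversationId;
        } finally {
            lock.writeLock().unlock();
        }
    }

    public static void main(String[] args) {
        ConversationIdHolder holder = new ConversationIdHolder();
        String first = holder.getOrCreateId();
        String second = holder.getOrCreateId();
        // Both invocations flow the same id, so the target runtime
        // can dispatch them to the same conversational instance.
        System.out.println(first.equals(second)); // prints "true"
    }
}
```

Once the write lock is released after the first call, the id is stable, which is why out-of-order arrival at the target runtime is harmless: both messages already carry the same id.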
In the above cases, the creation of a conversation id
is orthogonal to the creation of a provider instance and the former always
completes prior to an instance being created. If we were to replace the
conversation id generation algorithm (UUID) with something more complex
(e.g. one that called out to the service provider runtime), the same sequence
would hold.
Also, the above sequence solves the problem of using conversational
services with transacted messaging, which arises from forcing a request-reply
pattern to be part of the forward service contract.
------------------------------ end of Jim's stuff
The thing that this logic avoids discussing
is that for there to be a conversation, something has to get started on
the server side - in my opinion, it is ESSENTIALLY a server-side concept.
The idea is that in a conversation,
when a second operation invocation happens on the server side, it does
so in the context of some state that was established by a first operation.
Instances, etc, are not the main point
- they are only one means to achieve this end.
It is important to understand that Op
A followed by Op B may make some sense, while Op B followed by Op A may
make no sense at all, in terms of a conversation.
e.g.:
Op A = "Start an order"
Op B = "Add to an order" -
called before Op A, the response is going to be "what Order?"
In principle, I think that an operation
that STARTS a conversation is special, in that it must do something on
the server before the conversation can be said to be "in progress".
At the very least it must set up whatever
the state is that is the context for future operations.
The only obvious way to guarantee this
is to ensure that the server-side operation that starts the conversation
is complete before any further operations are done within the same conversation.
Note that my reasoning here does not
depend on ANY mechanics - I think that it is true whatever the mechanics.
ONE WAY to achieve the required behaviour
is to require that any operation which is to start a conversation is treated
as a Synchronous operation - so that it is guaranteed to be complete before
the next operation is invoked. This approach is at least simple to
understand at a programming level and I believe that it is straightforward
for binding implementers to handle. It is easy to warn a client programmer
that they must call operation X first and must not call any other operations
on the same reference proxy until it completes (e.g. if they are multi-threading).
Other approaches might be possible, but seem more complex to me. For example,
requiring strict sequencing of the forward messages ("in order", "once and
only once"), combined with a requirement that messages after the first are
only processed once processing of the first is complete.
Yours, Mike.
Strategist - Emerging Technologies, SCA & SDO.
Co Chair OASIS SCA Assembly TC.
IBM Hursley Park, Mail Point 146, Winchester, SO21 2JN, Great Britain.
Phone & FAX: +44-1962-818014 Mobile: +44-7802-467431
Email: mike_edwards@uk.ibm.com
From: Jim Marino <jim.marino@gmail.com>
To: OASIS Java <sca-j@lists.oasis-open.org>
Date: 07/08/2008 23:54
Subject: Re: [sca-j] Another early morning brainstorm - conversations revisited
Simon,
I've commented inline and snipped some of the previous
text to make it easier to read...
<snip/>
I accept that my current proposal
doesn't do this, but tries to provide quite a lot of capability within
the infrastructure. If the TC feels this road is leading to a dead
end then we could start a very different discussion about what additional
things could be delegated to business code.
I don't think it is necessarily leading to a dead end but it would be beneficial
to step back and agree on the main use cases we are trying to achieve as
well as the extent of changes we are willing to make. If we agree
on the use cases, it will be easier to judge the merits of the various
proposals by looking at how application code would need to be written.
Use cases also have the nice effect of keeping scope limited.
<scn3>My main concern about moving the focus of the discussion to
use cases is that we already tried to do this, and the result was the union
of everyone's opinions on what the important use cases are (i.e., no possible
use cases were eliminated).</scn3>
I still think this would be useful, even if it is a union,
particularly to contrast complexity introduced into application logic by
the various proposals.
<snip/>
What happens if I want to introduce asynchrony (non-blocking
operations) such as:
public interface OrderService {
@OneWay
void startNewOrder();
@OneWay
void orderApples(..);
@OneWay
void orderPlums(..);
}
Does the proposal require interfaces to have a synchronous operation to
start a conversation?
<scn2>Yes it does, and I believe this restriction is needed whether
or not we make API changes. The first operation of a conversation
will create an instance and initialize it for use by subsequent calls within
the same conversation. If the first operation is oneway, the client
can proceed immediately and it might make the second call before the first
call has completed (or even started) executing.</scn2>
I don't believe the current API requires this at all and that behavior
is the opposite of what I am proposing. The reference proxy can simply
make an out-of-band synchronous call to allocate a conversational id. This
may or may not involve out-of-process work. Either way, the proxy does
not return control to the client until an id is generated. The two key
things I am proposing are:
1. How the conversation id is generated does not bleed through to the programming
model
2. Id generation does not require an operation on the target service to
be invoked
Expanding on point 2, requiring a conversation to be initiated by a synchronous
operation *to the service* cannot work over JMS when the client is using
transacted messaging, since messages are not sent until after the transaction
has committed. This is a very common messaging scenario. Assuming the callback
is handled via a reply-to queue that the client listens on, the consumer
only receives enqueued messages when a transaction commits, thereby inhibiting
the client from receiving a response. If in the original above example
the client is participating in a global transaction and OrderService.startNewOrder()
returns a CallableReference or a proxy, the client will hang as the forward
message will not be received until the transaction commits (which won't
occur since the client would be listening on the reply-to queue).
To avoid this, I imagine most JMS binding implementations would use the
mechanisms as described by the JMS binding spec and pass a conversation
id in the message header.
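The header-based approach can be illustrated with a minimal sketch. To keep it self-contained, the `Message` class below simulates a JMS message and its string properties rather than using a real JMS provider, and the header name `scaConversationId` is purely illustrative; the point is that the id rides along on every forward message, so no synchronous request-reply exchange is needed to establish the conversation.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Sketch: the conversation id travels as a message header, so the
// forward path stays one-way and transaction-friendly. "Message"
// stands in for a JMS message; it is not a real JMS API.
public class HeaderSketch {
    static class Message {
        final String operation;
        final Map<String, String> headers = new HashMap<>();
        Message(String operation) { this.operation = operation; }
    }

    // Client side: stamp every outbound message with the cached id.
    public static Message send(String operation, String conversationId) {
        Message m = new Message(operation);
        m.headers.put("scaConversationId", conversationId);
        return m;
    }

    // Service side: route to (or lazily create) the instance for that id.
    static final Map<String, String> instances = new HashMap<>();

    public static String dispatch(Message m) {
        String id = m.headers.get("scaConversationId");
        // The target instance is created on first arrival, regardless
        // of which operation happens to arrive first.
        return instances.computeIfAbsent(id, k -> "instance-for-" + k);
    }

    public static void main(String[] args) {
        String id = UUID.randomUUID().toString();
        // orderPlums arrives before orderApples; both still reach
        // the same conversational instance.
        String a = dispatch(send("orderPlums", id));
        String b = dispatch(send("orderApples", id));
        System.out.println(a.equals(b)); // prints "true"
    }
}
```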
Therefore, I believe your proposal won't work in these scenarios.
<scn3>I understand your point about JMS. However, you haven't
addressed my other point about the conversational provider instance needing
to be created and initialized before further invocations are made on it.
For transports that provide reliable queued in-order delivery of
messages (the JMS case) the transport can take care of this. For
other transports, the first invocation must execute and complete before
the second one can occur. This serialization needs to be handled
somehow, either by the application or by the infrastructure.</scn3>
Sorry, I didn't explain well. Let me try with an
example. The current programming model will work fine in the case you outline
above, that is, when an invocation arrives out of order as long as the
creation of the conversation id is a synchronous event from the perspective
of the client reference proxy. Let's start by taking a simplistic SCA implementation
and look at the sequence of events that would happen:
1. Application code invokes a reference proxy that represents
a conversational service by calling the orderApples operation
2. The reference proxy knows a conversation has not been
started so acquires a write lock for the conversation id and *synchronously*
performs some work to get a conversation id. Since this is a simplistic
implementation, it generates a UUID and caches it, then releases the write
lock. From this point on, the conversation id is available to the reference
proxy.
3. The reference proxy then invokes the orderApples operation
over some transport, flowing the id and invocation parameters
4. The reference proxy returns control to the application
logic
5. At some later time, the orderApples invocation arrives
in the runtime hosting the target service
6. If the target instance is not created or initialized,
the runtime does so
7. The runtime dispatches the orderApples invocation to
the target instance.
Now let's assume the client invokes both orderApples and
orderPlums in that order, which are non-blocking. Let's also assume ordered
messaging is not used (e.g. the dispatch is in-VM using a thread pool)
and for some reason orderPlums is delivered to the target runtime before
orderApples. Steps 1-4 from above remain the same. Then:
5. The application logic invokes orderPlums.
6. The reference proxy acquires a read lock to read the
conversation id which it obtains immediately. It then releases the lock
and flows the invocation data and id over some transport.
7. The orderPlums request arrives, along with the conversation
id created in step 2 before the orderApples invocation.
8. The target instance does not exist, so the runtime
instantiates and initializes it
9. The orderApples invocation containing the operation
parameters and the same conversation id arrives on the target runtime. At
this point the target instance is already created, so the runtime dispatches
to it.
Note that the read/write lock would not be necessary for
clients that are stateless components since they are thread-safe.
In the above cases, the creation of a conversation id
is orthogonal to the creation of a provider instance and the former always
completes prior to an instance being created. If we were to replace the
conversation id generation algorithm (UUID) with something more complex
(e.g. one that called out to the service provider runtime), the same sequence
would hold.
Also, the above sequence solves the problem of using conversational
services with transacted messaging, which arises from forcing a request-reply
pattern to be part of the forward service contract.
2. Clarification on service operation signatures
I'm unclear if by the following the proposal intends to require use of
CallableReference for conversational interactions:
A simple extension to the model already proposed can solve both these problems.
A conversation would be initiated by the service creating a CallableReference
and returning it to the client. This CallableReference contains an
identity for the conversation. This client then makes multiple calls
through this CallableReference instance. Because these calls all
carry the same identity, a conversation-scoped service will dispatch all
of them to the same instance.
I'm assuming this is just for illustrative purposes and it would be possible
for a conversation to be initiated in response to the following client
code, which does not use the CallableReference API:
public class OrderClient ... {
@Reference
protected OrderService service;
public void doIt() {
service.orderApples(...);
service.orderPlums(...); // routed to the same target instance
}
}
Is this correct?
<scn>In a word, No. All conversations would need to be initiated
by the proposed mechanism of having the server return a CallableReference
to the client. This allows the conversation identity to be generated
by the server, not the client. Several people (e.g., Anish and Mike)
have called this out as an issue with the current mechanism for conversations.</scn>
Sorry for being so thick, but I don't see why the above could not be supported
using "server" generation of conversation ids. We should be careful
here to specify what we mean by "server", and whether invocations
are flowing through a wire or a client external to the domain. I don't
think the term "server" should necessarily mean "the runtime
hosting the service provider." Sometimes this may be the (degenerate)
case, but not always.
For communications flowing through wires, the only thing we can likely
say is "conversation ids are generated by the domain". I
would imagine most domain implementations would choose an efficient id generation
scheme that can be done without context switching on every invocation (e.g.
UUID generation provided by the JDK). However, in cases where this efficient
identity generation is not possible, I believe SCA infrastructure can support
the above code. In this case, the reference proxies would be responsible
for making some type of out-of-band request to generate a conversation
id to some piece of domain "infrastructure".
<scn2>I'm very uncomfortable with placing this kind of runtime requirement
on the SCA domain, which IMO should not take on the responsibilities of
a persistence container. For efficient support of persistent conversational
instances with failover, load balancing and transactionality, the ID may
need to be generated by a persistence container. For example, it
could be a database key.</scn2>
I think this is less of a requirement than what you are proposing, which
also shifts the burden to application code (which should not have to deal
with these mundane infrastructure concerns).
The only requirement I am making is the "domain" provides the
key. My usage of the term "domain" is intentionally vague: it
could be a database, some service hosted in a cluster, or a snippet of
code embedded in a Java proxy. Generating the id can therefore be
done using a database key or, more simply, by having a reference proxy
use facilities already provided in JDK 1.5 or greater, which would
require one line of code. My proposal would not restrict the SCA infrastructure
in how the id is generated, other than it is done synchronously and out-of-band.
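The "one line of code" case mentioned here is just the JDK's built-in UUID support, available since Java 5. A minimal sketch (the class and method names are illustrative, not part of any SCA API):

```java
import java.util.UUID;

// The simple in-proxy id generation scheme: no context switch,
// no out-of-process call, one JDK invocation.
public class IdGeneration {
    public static String newConversationId() {
        return UUID.randomUUID().toString();
    }

    public static void main(String[] args) {
        // Each conversation gets a distinct id; the format is opaque
        // to application code either way (UUID or database key).
        System.out.println(!newConversationId().equals(newConversationId())); // prints "true"
    }
}
```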
<scn3>I'm concerned about putting too much mechanism into the SCA
domain. I think it needs to support SCA wiring, deployment and configuration.
I'd expect other middleware that's not part of the SCA domain
to provide things like persistence, load balancing and failover.</scn3>
We may have a conceptual difference. I consider domain
infrastructure to include any middleware resources used by the SCA implementation.
This may include multiple runtimes, databases, messaging providers, JEE
app servers etc. An SCA implementation would not necessarily implement
persistence, ordered messaging, transaction recovery, etc. Rather, it could
use other software to provide those features. For example, an SCA implementation
that supports key generation using a database table may only contain a
DDL script and code that makes a JDBC call. Given that, I don't think I'm
putting anything into the domain.
The important thing is how conversation id generation happens does not
bleed into the programming model and is transparent to the application.
In other words, we should not require anything more complex than this when
the OrderClient is wired to an OrderService:
public class OrderClient ... {
@Reference
protected OrderService service;
public void doIt() {
service.orderApples(...);
service.orderPlums(...); // routed to the same target instance
}
}
<scn2>Another important thing is how much complexity is needed inside
the infrastructure to support the programming model. This cannot
always be hidden under the covers, especially when dealing with failure
cases and complex environments. The need for persistent transactional
storage of conversational instances is an example of where this complexity
arises.</scn2>
Here it would be useful to outline a specific use case for "persistent
transactional storage of conversational instances" as I'm not sure
what that entails. Does it mean the following:
1. Invocations to orderApples() and orderPlums() are done with guaranteed
delivery
2. Changes to the *state* of a given OrderService instance are guaranteed
to be available in the case where the runtime hosting the instance fails
I suspect if it is the above, much of the complexity will be buried in
the messaging and failover infrastructure, not the SCA runtime.
<scn3>SCA would need to decide whether or not such guarantees are
part of the SCA conversational programming model. If they are, then
the SCA domain would have to step up to providing the mechanisms to implement
them. It's currently unclear from the SCA specs what assumptions,
if any, application code can make about these matters.</scn3>
I believe these should not be concerns of the programming
model. They should be expressed through policy. Also, the SCA implementation
would be free to delegate to some other middleware to provide these features.
<snip/>
How about the current API:
public class OrderClient ... {
@Context
protected ComponentContext context;
public void doIt() {
service.orderApples(...);
CallableReference<OrderService>
reference = context.cast(service);
//...
}
}
Or, we could change "cast(..)" to
public interface ComponentContext {
<T> CallableReference<T> getCallableReference(T proxy);
}
I believe this to be similar in spirit to working with message-based correlation
ids (e.g. JMS) where the forward message id is not available until after
the message has been enqueued.
<scn2>This doesn't work because a callback can occur before control
is returned back to the invoking thread of execution. So the client's
conversation correlator must be known before invoking any call that may
trigger a callback, in case the callback business code needs to do anything
that needs to use the conversational state. There's a similar issue
with "callback ID" (if we retain this concept), as a forward
call can invoke a callback which may need to use a previously generated
callback ID to identify the context in which the callback business code
should execute.</scn2>
This assumes the conversation id is generated asynchronously from the client.
That is not what I am proposing. The first invocation on a proxy would
not return control to the client until after a conversation id was generated.
The difference between what I am saying and what you are proposing is that
the synchronous act of generating the id:
1. Is not exposed to application code and does not place requirements on
the service contract
2. Does not necessarily require any out-of-process work
<scn3>The problem with this is that as soon as the invocation is
made on the service, a callback could arrive, and this could happen before
the client code that calls the proxy has received control back from the
SCA runtime and been able to process the generated ID.</scn3>
In the sequence I defined above, a callback can never
happen before the conversation id is generated since the latter is a synchronous
operation that completes before a service provider instance is created.
<snip/>
<scn2>Does this help at all? It eliminates the use of
CallableReference in business service APIs, though not in client business
logic.</scn2>
It's a start but doesn't address my main concerns:
1. I don't think it will work for important messaging use cases
2. It places unnecessary restrictions on conversational service contracts,
namely a forward synchronous invocation
3. It will result in a lot of unnecessary application boilerplate code.
For example, all conversational calls will need to start with a synchronous
call to the provider, even if no application or business function is modeled.
4. Application logic is still unnecessarily tied to infrastructure
5. It's a lot more complex than the examples I gave or alternatives to
SCA
6. It's confusing, particularly this line:
OrderService myConversation = myService.startNewOrder(); // returns an
ID for the entire fruit order
For example, if I am an app developer, I will ask: why do myService and
myConversation implement the same interface? What's the difference? (I know
what it is, but one can't tell by looking at the code.)
7. Requiring an extra invocation to start a conversation does not promote
coarse-granularity. It will result in an unnecessary and potentially costly
performance impact as an additional remote call is introduced.
<scn3>I'll observe that most or all of these focus around the proposed
additional forward synchronous call that returns a conversation ID. If
we could find a way to avoid the need for this, we might be close to agreement.</scn3>
Yes, that is mostly it although I also believe we do not
need CallableReference and we may have some differing opinions on the relationship
between callbacks and conversations. I believe Mike's and my proposal solves
the issues brought forth, as well as providing a way to avoid requiring a
forward synchronous call to return a conversation id to application logic.
If you don't agree that I have dealt with the issues raised previously,
could you let me know which specific cases? If I have, would it be
acceptable to look at what you don't like about what Mike and I
have proposed, and perhaps take that as a point for moving forward, since
the proposal only subtracts from the current API and involves less change?
Jim