OASIS Mailing List Archives

wsbpel-uc message


Subject: Call-in information for Use Case Conf Call: 7/21 @ 1 PM PT / 4 PM ET

Hello all,


The call-in information for the Use Case conference call is:


Toll-Free (US and Canada): +1 866 500 6738

Toll: +1 203 480 8000

Passcode: 600345


Please contact me if you have any questions or concerns.  I will send out an agenda update tomorrow morning.




John Evdemon




From: John Evdemon
Sent: Friday, July 18, 2003 3:30 AM
To: wsbpel-uc@lists.oasis-open.org
Subject: Proposed Use Case Conf Call: 7/21 @ 1 PM PT / 4 PM ET


Hello all,


Please accept my apologies for the delayed follow-up.  I am on the road this week with limited email access.  


I would like to arrange a Use Case Conf Call for next Monday at 1 PM PT (4 PM ET).  Please let me know if this time will be feasible.


A draft agenda appears below (I will finalize this agenda later tomorrow):


1)   Review of current status

2)   Review of Use Case candidates

a.   I have compiled a document (attached) summarizing selected posts from the mailing list.  Many of these posts could be considered for one or more Use Cases.

b.   Simpl EB

c.   CPFR (One of the nine steps)

d.   Other candidates?

3)   Vote on which Use Case candidates will be presented to the TC at our next conf call

4)   Assign “owners” to each Use Case

5)   Next steps


Again, this is a draft agenda that will be finalized tomorrow (I will also send out the call-in information). 

I will send out some “reading assignments” for people that may be unfamiliar with Simpl EB or CPFR.


Please let me know if 7/21 at 1 PM PT/4 PM ET will work for you.  


Thanks for your time – sorry again about the delay in getting this conf call scheduled.




John Evdemon



From: Sally St. Amand [mailto:sallystamand@yahoo.com]
Sent: Thursday, July 10, 2003 10:21 AM
To: Sazi Temel
Cc: wsbpel-uc@lists.oasis-open.org
Subject: Re: [wsbpel-uc] When is the next call?



There has not been a conference call since 6/23. On yesterday's general conference call, JohnE said he was going to schedule a use case conf call late this week or early next week. Since you were gone: I put out a doc, "Voting on Use Cases," that is a proposal we need to talk about, and Ben Block put out some info and proposals relating to the template. Check the website for this stuff.

Personally I can do a conf call next Mon (7/14), not Tues, and Wed is a maybe. If we all let John know our availability it might be helpful to get something set up.

Hope your vacation was fun.




The "multi-start activities" example is one kind of use case, where rendezvous occurs at activation hence before the
creation of any instance ID.  Consider also multiway conversations, especially those where A-->B-->C-->A  types of
communication loops occur.  What these need are mechanisms for carrying context and "infecting" the right instances with
the context -- WS-Coordination has this as a general mechanism.  The scope of such a coordination does not in general
coincide with the lifetime of a process instance.  You can think of correlation sets as a "poor man's context" mechanism.
I agree with the goal of reducing the scope of explicit correlation if possible, but that would mean dependency on
some specific context mechanisms for two-way conversations and multi-way coordination.  I would hate to see us create
some sort of meta model for context that has to be mapped to concrete coordination layers.  Too much complexity.

I believe a <sequence> activity can be replaced with a <flow suppressJoinFailure="no"> activity, with a few more
caveats, to produce equivalent control sequencing behavior.
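
As a rough sketch of the rewriting in question (activity and link names are invented for illustration), a two-step sequence such as:

      <sequence>
        <invoke name="a" ... />
        <invoke name="b" ... />
      </sequence>

might, assuming default join and transition conditions, become a flow in which an explicit link enforces the ordering:

      <flow suppressJoinFailure="no">
        <links>
          <link name="aToB"/>
        </links>
        <invoke name="a" ... >
          <source linkName="aToB"/>
        </invoke>
        <invoke name="b" ... >
          <target linkName="aToB"/>
        </invoke>
      </flow>

The caveats concern details such as join conditions and the fault raised when an incoming link's status evaluates to false.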

Assaf Arkin:
What if a <sequence> were properly rewritten as a <flow>? My question, again: is there a proper way to rewrite a sequence
as a flow so you preserve the semantics of the execution but only use the flow activity? If so, what are the pros/cons of
doing that?

Bernd (responding to Satish):
>>I honestly don't think <sequence> needs to justify its existence.
>>Concurrency with synchronization can emulate sequentiality but that is 
>>clearly a convoluted and expensive way to do the simplest kind of 
This may be true from the standpoint of writing BPEL by hand, but it is surely a non-issue for implementation.
Depending on your internal runtime data model, a sequence is only an additional complication, given that you
need to offer an implementation of flow anyway. And since a sequence is not forbidden from having links in and out, it also
means your engine has to support the notion of synchronisation anyway.

So we should make clear in the spec that it is only a shortcut for skipping those links inside a sequential flow, and that
all other properties still apply.
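
For instance (link names invented for illustration), nothing in the syntax prevents a sequence nested inside a flow from being the source or target of links:

      <flow>
        <links>
          <link name="in"/>
          <link name="out"/>
        </links>
        ...
        <sequence>
          <target linkName="in"/>
          <source linkName="out"/>
          <invoke ... />
          <invoke ... />
        </sequence>
        ...
      </flow>

so an engine must be prepared to synchronise on a sequence exactly as it would on any other activity.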

Ron Ten-Hove:
    If BPEL were to be strictly a modelling language, then the inclusion of both <sequence> and <flow> would be
    justifiable from the modelling perspective. Perhaps we can make this claim for abstract BPEL?

    On the other hand, "concrete" BPEL has a different purpose: it is an executable description of a process. It may be
    appropriate for the TC to create a formal semantics for the execution of each BPEL element, in order that we may
    analyse and better understand what we are creating. Even if we don't go as far as formalising the semantics, it is
    true that the simpler the language, the easier it is to reason about. From this perspective, it is entirely
    appropriate for Arkin to ask that the <sequence> activity justify its existence, when equivalent structures exist.

    So what is our rationale moving forward? Is this (primarily) a modelling language, or an executable artifact? Or are
    there better criteria for resolving these issues?

Ron Ten-Hove:
This brings us back to the "how do we author BPEL documents?" question again. I would submit that if the process author
has no more sophisticated an authoring tool than vi or emacs, wizardry will be required regardless. Between managing
information from assorted WSDL documents, complying with one or more abstract BPEL processes, and sorting through a
relatively complex syntax (XML serialization being what it is), we have already restricted the pool of authors to a
relatively small one (with all the right skill sets). Even hello world, aided by syntactic sweeteners, becomes a headache.

On the other hand, if the process author is aided by suitably helpful tools, then the amount of detail he/she needs to
manage can be reduced. If they are instead faced with authoring suitably annotated process graphs, with BPEL existing
safely behind the scenes, then the BPEL language can be optimized to improve important aspects of a process language,
rather than for raw-text authoring.

A high-level process graph is typically not directly executed, but rather converted / compiled to be executed atop a
purpose-built infrastructure. Human-friendly structures are converted into ones amenable to execution by machine.

The question is -- where does BPEL fit in this? Is it a high-level representation, suitable for modelling processes in a
fashion that a business analyst (or other such domain expert) wants? Or is it an intermediate form, with well-understood
execution semantics? Or is it a little of both? If the last choice is correct, then what principles will be used to
choose between the dual poles of representing high-level domain concepts, and simple executable semantics?

Assaf Arkin:
Hello world does require a 2-star wizard. You need to define your message using XML Schema, strap a WSDL operation onto
that, add the protocol bindings so you can define an endpoint. When you link that to the BPEL make sure to use the
proper references to the proper definitions dispersed throughout these documents, or it will not validate. And don't
forget the deployment descriptor -- you won't be able to send/receive messages without it.

We can make BPEL as easy as HTML; you still won't be able to deploy it without tooling support. So if you look at the
whole mix of specifications you have to deal with just to say "hello world", it really makes no difference. For the
tools both options are the same, and for the vi user both options are impossible.

You need to build support for serializing activities that synchronize through links. It's not an easy task, but you'll
have to do it anyway. If you already spent the time making flows work properly, might as well use that piece of code in
all cases. If you write it once and use it all over the place then there's a good ROI argument in favor of using <flow>
as much as possible: less code to develop, fewer test cases, etc. At least from an execution perspective.

Jim Webber:
After having hand written BPEL scripts, and intending to do so in future (because sometimes you just have to do things
for yourself), I would plead the case for keeping useful bits of syntactic sugar like <sequence>. For those of us who will
be writing BPEL by hand (and I might be a minority of one here), other constructs like <macro> or <procedure> would be
very welcome too.

The point is, although I can see that tools are really useful in this arena, there will always be cases where tools
aren't applicable. Given that BPEL hasn't been used much so far, it is premature to start optimising away features that
I (for one) might rely on!

BPEL is an abstract virtual machine; it should have the minimum constructs necessary to express the universe of
supported programs. If there is no "if-then-else" because "switch" has all the expressive power (and then some) of
"if-then-else", then there should not be a "sequence" by the same reasoning.

If a sequence has incoming and outgoing links, and is replaced with a flow that itself has links, I submit that
recovering the original pattern and its simple, inexpensive implementation would be challenging.

In other words, we would be raising the bar for implementers of both tools and runtime engines, especially if we expect
them to cater to the prejudices of those who prefer the "block structured" approach rather than the "linking activities"
approach that Assaf seems to like these days.

I would like to propose that we remove the concept of "compensating a process as a whole".

From 6.4: 
  If a compensation handler is specified for the business
  process as a whole (see Compensation Handlers), a business
  process instance can be compensated after normal completion by
  platform-specific means. This functionality is enabled by
  setting the enableInstanceCompensation attribute of the
  process to "yes".
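
In the serialized form, the quoted passage corresponds to something like the following sketch (the process name is invented, content abbreviated):

      <process name="purchasing" enableInstanceCompensation="yes" ... >
        <compensationHandler>
          ... process-level compensation logic ...
        </compensationHandler>
        ... the "body" of the process ...
      </process>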

Rationale 1: "compensating a process as a whole" introduces uncertainties into the specification. These uncertainties leave the feature hardly usable, in my opinion.

  Uncertainty 1: how to request compensating a completed process
  instance is undefined.

  Uncertainty 2: how long a process instance remains
  "compensatable" is undefined. It should be natural to expect
  that after some time the data of a completed process is
  removed from the system (to save disk space, etc). Thereafter
  compensation cannot be performed.

Rationale 2: There already is an easier and clearer way of achieving the same effect without incurring the uncertainties
described in Rationale 1. By configuring a process as follows, you can clearly express:
  * how to initiate compensation of the process "body", and
  * the deadline of initiating compensation after the completion
    of the process "body".

      if a certain message is received, compensate.
       ... compensation handler ...
       ... the "body" of the process ...
      <wait ... until some desired time ... />

* Clearer and more flexible semantics.
* An execution engine can immediately remove data for a
  completed process instance.

Assaf Arkin:
The event handler is enabled for the lifetime of the process before you reach a point where compensation is possible.
However, you can replace the wait with a pick that either waits for a compensation request (received only once) or gives
up after some time-out. That's consistent with your suggestion for writing a compensation handler using the available
set of constructs.
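
A sketch of that pick-based variant (the partner link, port type, and operation names are invented; note also that the current spec does not allow <compensate> outside a compensation handler, as discussed next):

      <sequence>
         ... the "body" of the process, in a compensatable scope ...
        <pick createInstance="no">
          <onMessage partnerLink="client" portType="tns:mgmtPT"
                     operation="requestCompensation" variable="req">
            <compensate/>
          </onMessage>
          <onAlarm until="... some desired deadline ...">
            <empty/>
          </onAlarm>
        </pick>
      </sequence>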

But there are two issues that would arise under the current specification:

1. Currently the specification does not allow you to perform compensation from outside a compensation handler, so you
can't invoke compensation on enclosed scopes. Clearly you would want to do that from the compensating activity that
follows the base activity.

Yuzo rewrote the example: "I've rewritten the example as follows. It should work now."

      catch compensationRequested fault and compensate.
       ... compensation handler ...
       ... the "body" of the process ...
        <onMessage...for compensation request...>
          <throw faultName="compensationRequested"/>

** this still needs to be tested **

2. For management purposes there's value in determining that the process has reached the 'completed' state, after which it
can only be compensated or discarded. Structuring activities in this manner would work, but would not give a management
tool visibility into the process state.

To solve the problem, we may need to formalize the concept of "body" of the process such that a process instance can be
in state  "process as a whole=running, body=completed". I don't like this idea very much, but nonetheless present it
here to hopefully promote the discussion.

I just wanted to contribute some experiences from implementing the spec. In my experience it was additional work to
support the sequence activity. It was not much, so we could easily keep that construct, but in that case I would vote
for a rationale in the spec describing that sequence has no semantic differences to a sequentially linked flow, just
to assure everybody reading the spec that they have understood it correctly and are not left wondering what they have missed.

David RR Webber:
Message text written by "Yuzo Fujishima"
> Rationale 1: "compensating a process as a whole" introduces uncertainties into the specification. These uncertainties
> leave the feature hardly usable, in my opinion.

Also - without clear use cases to back up the need for this, it will just be an ongoing source of frustration
into the future.

I also see - once we have the use case work done, and once we have the liaisons in place - we can suggest some other
means to solve the extended picture - especially when the use case need is to coordinate across the architecture to
achieve the desired functionality.

Other teams have done this with Technical Notes and Adjuncts to the main spec' - and it works very well.

Means you can focus on the core aspects and get those really done well - and not sweat trying to have the whole kitchen
sink in there in the first release.

Consider also the BA protocol in WS-Transaction which shows how the compensation of a subordinate instance would be
invoked by a controlling instance, if we were to specify such a subordinate-controller relationship in BPEL, effectively
"remoting" a scope as a separate instance (without implicit state sharing).  The BA protocol messages could be handled
by implicit event handlers that maintain the BA protocol semantics.

I am not saying yet that we should actually do this, just throwing out some interesting possibilities to think about.

It would be very useful to have a session focused on compensation, scope state and WS-C/WS-T. We talked about that at
the F2F and in the following email exchanges.

I suggest that we take one of the examples used by Satish at the F2F (travel procurement) and try to get down into the
details of how exception management and compensation management would be implemented. This will help us flesh out the
details of how BPEL and WS-Transaction (BA) work together.

If we decide that this is valuable, I volunteer to implement the BPEL processes once the use case has been agreed on.

Assaf Arkin:
Suppose you have some process A that performs some operation and supports compensation through its process-level
compensation handler. (The topic of discussion is process-level compensation handlers, not scope-level compensation
handlers, which are covered in the appendix)

You have some process B that performs an activity called X. Activity X invokes process A and so process B may later on
decide to invoke A's compensation handler. Assume you would want to use WS-TX to do that. Interoperability means that
two systems would understand how it works and do it in the same manner (barring any other differences).

What would the compensation handler for activity X look like?

Let's say A is never compensated unless there is a compensation handler for activity X. The compensation handler for
activity X does not need to do anything, so it contains an <empty> activity. According to the current spec, if the
compensation handler includes an <empty> activity then nothing would happen. So there needs to be a clarification that
some work would indeed happen and that this compensation handler may actually throw a fault (if A returns 'faulted').

Another implementation may decide that to compensate A you need to invoke the default compensation handler for X. So if
X has a compensation handler containing <empty> it would not invoke A's compensation handler, but if it has no
compensation handler then A gets compensated using WS-TX. You can use the <empty> activity to prevent A from being
compensated (a good thing), but you can't interact with A within X's compensation handler, which violates the need to
pass data to a compensation handler.
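
For concreteness, the ambiguous case could be sketched as follows (scope, partner link, and operation names invented). Under the first reading, compensating X does nothing because the handler is <empty>; under the second, A would still be compensated via WS-TX only if the handler were absent:

      <scope name="X">
        <compensationHandler>
          <empty/>
        </compensationHandler>
        <invoke name="callA" partnerLink="processA"
                portType="tns:aPT" operation="doWork" ... />
      </scope>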

Another possibility is for X's compensation handler to explicitly send a compensation message to A. Unfortunately, X
doesn't know the participant or coordination addresses and so can't pass them to A's WS-TX implementation.

Considering the lack of clarification in the specification, I am very curious to see what such an example would look like.

Edwin's response:

I think that we all agree that although this might be clear in a few people's mind, there is room for interpretation and
for the sake of interoperability the spec needs further clarification on that specific subject.

I am suggesting that we start by defining a simple example as a way of identifying the areas that need further
clarification within the spec.

I think what we lack here is coherence. If we are not striving for a minimal feature set, then there are other features
we should entertain that would simplify some definitions, especially from the perspective of modeling and/or XML
authoring. If we are looking for a minimal feature set, what are our criteria? Is it strictly to address execution, or
do we need to look at a larger scope?

The abstract process, as I understand it, is used to define the business protocol. I'm not sure what other purposes it serves;
the only one I can think of is to constrain the process definition, which means you need to check for interface
compliance. Again, in terms of implementation a <sequence> doesn't give you much unless we have a profile in which <flow>
is not supported.

As for modifying by humans, again we have to look at how often/how much. If it's frequent then you add a requirement to
support XML authoring and add more constructs to the syntax to make life easier. If it's less frequent, then you try to
be as easy to use as XML Schema, WSDL and other related specifications.
