OASIS Mailing List Archives

 



wsbpel message



Subject: Re: [wsbpel] Sub-functions: some thoughts


Edwin,

    Thank you once again for your thoughtful responses; they are very helpful in building a shared understanding of the issues surrounding sub-functions/sub-processes.

    My comments are in-line with yours.

Edwin Khodabakchian wrote:
We should be driven by use cases NOT analogies. In this case, the use case is about re-use. As pointed out, there are multiple ways to address that need:
(1) Encapsulation of the logic to be re-used in a separate process
(2) Let the tooling address this need through templates
(3) Create a new construct at the language level (called subfunction)
 
It seems that the only benefit of (3) over (1), (2), or a combination of (1) and (2) is that the use of subfunction signatures avoids having to create a new message type and the assign activities needed to initialize and read data from the message types.
    This is not an insignificant advantage. Crafting new message types and writing "gnarly" assign activities are not simple tasks, and can be regarded as "overhead" getting in the way of the process author. Put another way, our domain-specific language requires a lot of solution-domain elements; sub-functions could serve to reduce the number of solution-domain elements the author need worry about. 
[edwink] I think that good tools will hide this complexity. In both cases, you need a mapping between the variables defined in the process and the signature of the subfunction. This mapping can either be defined in the assign activity or inlined in the subfunction. So the benefit for the process author is not the mapping (because that part is not going away); it is only the type definition (the XML Schema part), and that can be absorbed by the tool. 
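[rtenhove] To make the overhead we are debating concrete, here is a rough sketch of what option (1) entails today. All names (approvalRequest, reusedProcessPT, and so on) are invented purely for illustration; this is not drawn from any proposal text:

```xml
<!-- Sketch of option (1): reusing logic via a separate process.
     The caller must define a new WSDL message type for the exchange,
     then marshal data in and out of it with assigns. Names invented. -->
<assign>
  <!-- copy process data into the message the sub-process expects -->
  <copy>
    <from variable="orderInfo" part="customerId"/>
    <to variable="approvalRequest" part="customerId"/>
  </copy>
</assign>
<invoke partnerLink="reusedProcess" portType="tns:reusedProcessPT"
        operation="approve"
        inputVariable="approvalRequest"
        outputVariable="approvalResponse"/>
<assign>
  <!-- copy the result back into the caller's own variables -->
  <copy>
    <from variable="approvalResponse" part="decision"/>
    <to variable="orderInfo" part="status"/>
  </copy>
</assign>
```

The two assigns plus the new message type are exactly the pieces a tool would have to generate or hide.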
There might also be an argument about (3) offering better support for privacy, packaging, and deployment.
 
But there are also drawbacks/costs with (3) that need to be weighed:
 
(1) Complexity: to really deliver on the benefit of sub-functions, we need to replicate a very large number of concepts and constructs already existing in the process and scope elements.
I think your general point about increased complexity is true, but I'm not sure about the need to replicate much of what already exists. Yaron's proposal seemed a very straightforward extension of those concepts, not duplicative. Perhaps you were referring to implementations, rather than the language itself? 
 
[edwink] I am only talking about the complexity of the language, NOT the implementation.
(2) Lack of communication channel: we need to create a communication mechanism between the subfunction and the calling activity so that the subfunction can report its status. This last need is much better addressed with option (1) because it is already supported by scope event handlers and receive activities.
An interesting point. I regard the proposed sub-function invocation mechanism as a simple extension of that used by existing activity types, such as <invoke>. Variables are passed by reference, either as input, or output. The chief difference is cardinality -- subfunctions allow multiple input (and output?) variables. The general mechanism, from the user's perspective, is the same.
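To illustrate the comparison I am making: existing activities already reference variables by name, and a subfunction call would simply generalize the cardinality. The subfunction syntax below is purely hypothetical (I am not quoting Yaron's proposal); it is only meant to show the shape of the extension:

```xml
<!-- Existing mechanism: one input variable, one output variable,
     each passed by reference (by name). -->
<invoke partnerLink="p" portType="tns:pt" operation="op"
        inputVariable="req" outputVariable="resp"/>

<!-- Hypothetical subfunction call: the same by-name mechanism,
     generalized to several input and output variables. -->
<call subfunction="checkCredit"
      inputVariables="customer order"
      outputVariables="decision reason"/>
```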

I'm not sure what you mean by a subfunction reporting its status. Are you envisioning some sort of asynchronous subfunction invocation? 
 
[edwink] The difference I am pointing to here is that the call to the subfunction will be a blocking call, and the subfunction has no way to report its status to the caller (by status I mean: I am at subfunction.activity1, subfunction.activityN, I am about to finish, etc.). On the other hand, if the re-used code fragment is instead modeled as a process, it can notify the caller, and the caller can handle those notifications using eventHandlers, pick/onMessage, or simply receive.
[rtenhove] Cannot a sub-function use invoke to send one-way notification messages to the instance? The fact that a sub-function call may block shouldn't affect the concurrent event handler. Or am I missing something here?
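The pattern I have in mind looks roughly like this (partner link and operation names are illustrative only): the re-used logic, modeled as a separate process, invokes a one-way "status" operation on the caller, and the caller's event handler fires concurrently even while the enclosing call blocks.

```xml
<!-- Caller side, sketched: the event handler receives one-way
     status notifications while the invoke is in flight.
     All names are invented for illustration. -->
<scope>
  <eventHandlers>
    <onMessage partnerLink="subProcess" portType="tns:callerPT"
               operation="status" variable="statusMsg">
      <!-- react to progress notifications from the sub-process -->
      <empty/>
    </onMessage>
  </eventHandlers>
  <invoke partnerLink="subProcess" portType="tns:subPT"
          operation="doWork" inputVariable="req" outputVariable="resp"/>
</scope>
```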
This is an area where BPEL is very different from traditional flow languages and this is why I am not sure that your "other process language do this" argument is valid.
I don't think BPEL is so different from existing languages, just simplified. The adoption of an abstract messaging model for work performers and requesters does eliminate (or move elsewhere) some of the complexities found in other languages. In some other languages it is theoretically possible to model other processes as work performers/requesters, but this, as far as I know, has never been carried to the extreme of requiring that it be done. Regardless, I don't think BPEL is so different that the need for a compositional mechanism supported by the language itself has been eliminated. 
 
[edwink] I do not agree with you on this. BPEL is both richer and different. Richer because it has much better support for exceptions, events and transactions. Different because it is about the coordination of message exchange patterns (NOT work management).
[rtenhove] BPEL does have some interesting features, as you touched on. I was referring to the simplified model of work items, as well as the restricted topology and expressiveness of a BPEL process graph. Also, BPEL, being based atop the WSDL messaging model, doesn't have to define much in terms of a data model or interaction patterns -- this shrinks the language further. The exception support is interesting, but hardly unique. The compensation and transaction support are interesting, but seem insufficient by themselves. But looking at BPEL purely as a process language, and at its ability to express actual business processes, with their complexities and interesting patterns, BPEL falls short of the mark set by other process languages; thus my comment about it not being as "rich" a language.

It is true that BPEL isn't directly about work management; rather, it is about orchestrating message exchanges. However, that is not the entire picture. BPEL will be used to solve problems, many of which will involve things that look an awful lot like work management and process automation. It is because of this that I hold that BPEL ought to be compared and contrasted with other process languages; if we don't do this, potential users certainly will.
(3) Lack of consistency:
This is a softer point, but I find the passing of arguments to the subfunction inconsistent with the rest of the spec: you need to create a new subfunction interface definition language (not WSDL), and you need to learn that sometimes you do an assign (when using invoke, receive, pick) for data manipulation and sometimes you can plug expressions directly into the signature (this is at least what Yaron is suggesting).
Right. There would be a new concept, the subfunction, which is invoked in a syntactically different, seemingly more direct way than services. This distinguishes subfunctions from (external) services, but at the cost of the user having to know the difference. The actual "pass by reference" semantics would remain the same, keeping things consistent at that level. 
 
[edwink] This goes back to my point about complexity. I am not sure of what you mean by pass by reference.
I was referring to Yaron's proposed sub-function support for BPEL. Variables are passed by reference to sub-functions. This is the same sort of mechanism that is used in <invoke>, <receive>, etc.: a reference, by name, to the variable to be read or written.
I believe that the single most important success factor for BPEL is to remain simple (which is not trivial given that we inherit the complexity of XML Schema and some of the exotic variations of WSDL). Given that there is a significant complexity "cost" associated with adding this feature to the language, I would like to recommend following Ugo's advice and finding use cases where the problem cannot be addressed using a combination of (1) and (2), and which therefore require us to go the extra mile of spec'ing out (3).
Unfortunately we have already established that we will not simply reduce BPEL to the simplest language possible. (I find this minimalist approach very attractive, but this has been rejected by the TC. BPEL is a modelling language as well as an executable one. This will increase language complexity / clutter.)

I still find Ugo's suggestion (that you refer to) as being decidedly biased, and it certainly misses the mark in this issue. As you mentioned earlier in your note, the question is about reuse: what language support for reuse should we have? This affects readability, composability, and (less directly) the granularity of reusable "components" that are feasible. None of these factors are examined when simply looking for use cases that cannot be addressed.
  
Your point about the cost/benefit trade-off is well taken. Adding more domain-specific concepts to a language makes it more complex to implement, while making it easier for end-users to utilize. Conversely, reducing the number of domain-specific concepts makes it harder for end-users to use the language, but makes implementations easier to realize. There is a balance to be achieved, not by always favouring minimal language features, but, as Satish reminded us, by using our judgement. How are we to judge the cost/benefit of language-based support for reuse?
 
[edwink] I am not sure that I am biased here. At the end of the day, we should be able to translate every aspect of the language (and the conceptual complexity it entails) to a requirement/use case/something the developer wants to do with solutions based on BPEL. And then make trade offs. The tradeoff I am suggesting we make here is to promote (1) and (2) as the best practice for reusing process logic instead of introducing the notion of subfunction.
I was referring to Ugo's request that we seek to prove that the WS approach to modelling subprocesses couldn't handle all use cases. I regard this approach as biased, since it puts an unreasonable burden of proof on one side of the discussion, but not the other. In contrast, in this particular email thread we are having a useful discussion about different approaches to reuse in process models, where, as you suggest, we have to weigh the costs and benefits of different approaches.

I see, both from your reaction above and your reasoned arguments throughout that you do not advocate "biased" approaches to discussion. I apologize for suggesting otherwise.

Perhaps we can explore all approaches, (1), (2), and (3), in further depth. Regarding (1) and (2), I am curious about possible support for those tools we keep leaning on: is there anything we can do to aid the portability or composability of composite processes, so that we avoid vendor lock-in and preserve key design information that would otherwise be locked into vendor extensions? Can we do more in this area than make simple suggestions or workarounds? Can we find an appropriate point of leverage for a good cost/benefit?

Cheers,
-Ron

