Ron, you say:
> A standardized modeling vocabulary is very useful; a modeling vocabulary
> that omits key concepts (for example, sub-processes) has far less value.
We have actually not
been talking about omitting the concept of sub-processes, but rather whether
this concept could be modeled with existing constructs (possibly with minor
modifications to existing syntax).
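To make the "existing constructs" option concrete, here is a minimal sketch of what it might look like. The sub-process is itself a BPEL process, deployed as a Web service, and the parent simply calls it through the existing <invoke> construct. The syntax is BPEL4WS 1.1-style, and all partner link, port type, operation, and variable names below are invented for illustration:

```xml
<!-- Hypothetical sketch: the "sub-process" is a separately deployed
     BPEL process; the parent reaches it via a plain <invoke>.
     All names are invented for illustration. -->
<sequence>
  <!-- Hand the order off to the sub-process service and wait for its reply. -->
  <invoke partnerLink="orderSubprocess"
          portType="sub:OrderHandlingPT"
          operation="handleOrder"
          inputVariable="orderRequest"
          outputVariable="orderResponse"/>
  <!-- Continue the parent process using the sub-process result. -->
  <reply partnerLink="client"
         portType="cl:OrderPT"
         operation="submitOrder"
         variable="orderResponse"/>
</sequence>
```

Under this reading, the only open question is whether minor syntax additions (for example, a shorthand for declaring the callee as a sub-process) are worth standardizing.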
My tools argument
was not about the tools compensating for functionality not existing in the
language, but rather about making the issue of how this functionality is
actually expressed less relevant.
Your argument that analysts are uncomfortable with a WS-based approach seems
rather weak, given that WS concepts are spreading very aggressively throughout
the industry (so that even pointy-haired managers are quite comfortable talking
about Web services today ;-). It would be like saying, 20 years ago, that
languages should not have adopted OO constructs just because programmers were
only comfortable with procedural programming.
Ugo
Ugo Corda wrote:
Ron, you say:
> Composing processes at a services level is unusual, and arguably runs
> counter to what analysts have come to expect. Only a software technologist
> would find the approach "natural."
Edwin's original message seems to indicate otherwise.
I suspect this says more about Collaxa's customer base than about process
languages in general. Collaxa is attracting the early adopters, who tend to be
far more technical than the average analyst we are discussing. One need only
examine existing languages aimed at business process automation and/or
modelling. The WfMC has done some interesting work over the years on
executable process languages.
Also, let's not forget our assumption that business analysts will most likely not get their hands dirty with the actual code, but will instead use visual tools that automatically generate the BPEL code.
We have already established that BPEL should be readable, and not the target
of sophisticated tools only. The whole discussion on the redundant nature of
the <sequence> activity was resolved with the assertion that BPEL is
meant to be directly read and written by users, even if redundant language
features are the result. Even so-called visual tools that isolate the user
from the XML syntax are apt to expose some (if not most) BPEL constructs
in a fairly direct fashion, at least in the near future. It sounds like we
are using contradictory assumptions.
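For readers who did not follow that earlier thread, a minimal sketch of the redundancy at issue (BPEL4WS 1.1-style syntax; all partner, port type, operation, and variable names below are invented for illustration):

```xml
<!-- Hypothetical sketch. Explicit ordering via <sequence>: -->
<sequence>
  <invoke partnerLink="backend" portType="be:WorkPT"
          operation="stepA" inputVariable="req"/>
  <invoke partnerLink="backend" portType="be:WorkPT"
          operation="stepB" inputVariable="req"/>
</sequence>

<!-- The same ordering expressed without <sequence>, using a <flow>
     with a link, which is roughly why <sequence> was called redundant: -->
<flow>
  <links>
    <link name="AthenB"/>
  </links>
  <invoke partnerLink="backend" portType="be:WorkPT"
          operation="stepA" inputVariable="req">
    <source linkName="AthenB"/>
  </invoke>
  <invoke partnerLink="backend" portType="be:WorkPT"
          operation="stepB" inputVariable="req">
    <target linkName="AthenB"/>
  </invoke>
</flow>
```

The resolution, as noted above, was that the first form is worth keeping precisely because humans read and write BPEL directly.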
I find the
argument that "the tools will handle the problems/complexities" of BPEL (or
any other vocabulary for that matter) to be a weak one. While there are
certainly cases where we will be forced to adopt this stance, it should not be
our first answer whenever we encounter difficulties in the language. I believe
Yaron characterized this approach as a "canard" during the last conference
call, and I must say I concur.
Can we please all
agree to not use the "tools will paper over the problems" argument in our
discussions, except as a last resort? Each time we dismiss a problem in BPEL
as a "matter for tools" we are creating a barrier to adoption. Good tools are
difficult (read: expensive) to write; the more complexity we force on those
tools, the more expensive they will be. This creates a very direct barrier to
adoption. Moreover, what value will BPEL have as a standardized
vocabulary if it must necessarily be hidden behind a lot of proprietary
approaches that compensate for its shortcomings? It has been asserted by the authors
that BPEL is a process modelling language, as well as an execution language;
this assertion seems to have found general acceptance by the TC. A
standardized modelling vocabulary is very useful; a modelling vocabulary that
omits key concepts (for example, sub-processes) has far less value. A
vocabulary that makes key concepts "vendor-specific extensions" is even worse!
I repeat: can we please, please, agree to not
resort to the "tools will fix that" line of argument, except as a last resort?
Otherwise, what are we all doing
here?
-Ron