

Subject: Re: [tosca] Event Interface Proposal


More inline …

/C

 

From: Chris Lauwers <lauwers@ubicity.com>
Date: Thursday, 4 October 2018 at 23:57
To: Calin Curescu <calin.curescu@ericsson.com>, Priya T G <priya.g@netcracker.com>
Cc: "tosca@lists.oasis-open.org" <tosca@lists.oasis-open.org>
Subject: RE: [tosca] Event Interface Proposal

 

Thanks. More comments in-line.

 

From: Calin Curescu [mailto:calin.curescu@ericsson.com]
Sent: Thursday, October 04, 2018 7:16 AM
To: Chris Lauwers <lauwers@ubicity.com>; Priya T G <priya.g@netcracker.com>
Cc: tosca@lists.oasis-open.org
Subject: Re: [tosca] Event Interface Proposal

 

Hi Chris,

 

Please find my answers inline.

 

BR,

/C

 

From: Chris Lauwers <lauwers@ubicity.com>
Date: Thursday, 4 October 2018 at 06:41
To: Calin Curescu <calin.curescu@ericsson.com>, Priya T G <priya.g@netcracker.com>
Cc: "tosca@lists.oasis-open.org" <tosca@lists.oasis-open.org>
Subject: RE: [tosca] Event Interface Proposal

 

Hi Calin,

 

Thanks for putting this together. This is excellent work. Adding support for asynchronous notifications will increase the usefulness of TOSCA tremendously, and it will allow us to integrate policies and workflows much more cleanly into the rest of the orchestration logic.

 

I have one main comment on your proposal, and a couple of smaller observations about specifics of the syntax.

 

- My main comment has to do with the use of "event types" as part of a "notification definition". I had assumed that the "notification" itself would define the event, and that the name of the notification would be used directly in the "event_type" section of a policy trigger. What is the motivation for introducing a separate event_type, rather than re-using the notification name itself? Are you introducing event types strictly to satisfy the (already existing) policy grammar? If so, I would prefer to not introduce any unnecessary concepts and instead just re-use the notification name itself.
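
For concreteness, a minimal sketch of that re-use; the policy type, trigger name, and action are illustrative, and the trigger grammar shown is an assumption based on the existing policy syntax:

policies:
  - upgrade_handling:
      type: tosca.policies.Update        # illustrative policy type
      triggers:
        on_upgrade_completed:
          event_type: Upgrade.completed  # the notification name, re-used directly as the event
          action:
            - call_operation: Maintenance.post_upgrade   # illustrative action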

 

Yes, I tried to connect it to the existing policy grammar. Nevertheless, there are some objective reasons:

- There might be other events in the orchestrator that are not related to notifications, so "other event types" are also needed (e.g. AttributeChanged or something similar)

- We could use several notifications to generate the same event (for ease of use). That is, a notification interface designed later can be added to an already existing trigger/policy that fires on a well-known event type (without needing to add this particular notification to the event_type list of the trigger).

- We could trigger more than one type of event (useful for defining triggers on more general / more specific things).

 

This said, I have no objection for the notification to intrinsically define an event. We can add the following to the notification description: whenever a notification arrives, it will always generate an event of a "type" equal to the name of the notification (i.e. org.ego.event.interfaces.Upgrade.completed).

- We need to decide if we want to keep the extra event generation list in the extended description of the notification definition. Would you like to get rid of that?

 

The main advantage of re-using the notification name is that notifications are defined explicitly as part of defining interface types. There isn't an equivalent mechanism for defining event types a priori. Basing trigger definitions on explicitly defined notification names seems safer to me than basing them on event type names that can be randomly introduced in service templates.

 

I'm also not sure about what other event types you envision. The "AttributeChanged" event is exactly what notifications are supposed to accomplish. For example, you could define a monitoring interface on a node that calls a notification every time a monitored attribute changes. The notification would result in the attribute value being updated in the instance model, and in policies being triggered as appropriate.

 

For instance, I would like to monitor an attribute value and generate an event when it's larger than 20. What would that event specification look like?
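
One way such a specification might look, assuming the existing trigger condition grammar; the event type, node, attribute, and action names are all illustrative:

triggers:
  high_load:
    event_type: AttributeChanged        # assumed orchestrator-provided event type
    target_filter:
      node: my_server                   # illustrative monitored node
    condition:
      constraint:
        load: { greater_than: 20 }      # fire only when the attribute exceeds 20
    action:
      - call_operation: Scaling.scale_out   # illustrative action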

 

Anyway, I agree that a notification can be directly seen as a type of event and used by name in its place. I will change the specification to reflect that.

 

- A couple of other comments:

a. It would be helpful to give examples of "notification implementations" using artifacts. I assume that the "notification" grammar will just re-use the existing "operation implementation" grammar, but we need to make sure we're not missing anything.

 

See my answer to Priya in the mail I just sent before this one.
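
In the meantime, a rough sketch of what such an example might look like, assuming the notification grammar re-uses the operation implementation grammar unchanged; the interface type and file names are illustrative:

interfaces:
  Monitoring:
    type: org.ego.interfaces.Monitoring           # illustrative interface type
    notifications:
      value_changed:
        implementation:
          primary: scripts/subscribe_value_changed.sh
          dependencies:
            - scripts/monitoring_lib.sh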

 

b. Your examples show a difference between "output definitions" and "output assignments". However, the operation output grammar we decided on does not make such a distinction. Instead, it only uses "output assignments" (or "attribute_mappings", to be correct), both in node templates and node types, as shown in Section 3.16.17. (However, I just noticed that section 3.16.17.2.3 in the latest 1.3 draft doesn't include operation output support in node templates, even though section 2.15 shows examples of this. I'll fix this in a future draft.)

 

I modeled the output definitions in the node type in a similar way to input definitions (section 3.6.17.1 Keynames, where input definitions are for the node type, while input assignments are for the node template). I wanted to keep the symmetry between inputs and outputs.

 

Yes, that's where I started as well, but I realized that this introduces unnecessary complexity, since in 99% of the cases the output value will need to be reflected into the same attribute, independent of the implementation of the operation. If that's the case, the type designer should specify the attribute mapping rather than expecting the template designer to do so.
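
A minimal sketch of such a type-level mapping, with illustrative names, using the [SELF, attribute] mapping form of the operation output grammar:

node_types:
  my.nodes.Database:                    # illustrative type
    attributes:
      admin_url:
        type: string
    interfaces:
      Standard:
        create:
          outputs:
            url: [ SELF, admin_url ]    # every template inherits this mapping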

 

Now I realize that if we define the outputs mapping directly in the node type (as you mention above), then we don't need to use a "definition" to set a datatype for the output (since it maps to an attribute of a certain type). Nevertheless, if we want to use the mapping in the template, then we should have some definition of the output datatype in the node type (which is information that the node type creator has, but which the node template creator may no longer possess).

 

We should talk about this more. Assuming we allow attribute mappings in a node template, we could have the following cases:

 

First, let's consider the case where the template defines an output that is already defined in the type. We could have the following options (a sketch of the first option follows the list):

 

  • The mapping in the template will override the mapping defined in the type. I.e. instead of storing the operation output in the attribute defined in the type, the operation output will be stored in the attribute defined in the template.
  • The mapping in the template will augment the mapping defined in the type. I.e. in addition to storing the operation output in the attribute defined in the type, the operation output will also be stored in the attribute defined in the template.
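
For concreteness, a sketch of the first (override) option, continuing the illustrative Database example from above; whether override or augment semantics should apply is exactly the open question:

node_templates:
  my_db:
    type: my.nodes.Database
    interfaces:
      Standard:
        create:
          implementation: scripts/create_v2.sh   # a new implementation in the template
          outputs:
            url: [ SELF, public_url ]   # under "override": replaces the type-level mapping to admin_url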

 

Then there is also the question of whether it should be possible to define "new" output-to-attribute mappings in templates that were not previously defined in the type. Should this be allowed?

 

I agree. We should keep the possibility to have the outputs of the operation mapped in the type and remapped in the template (if a new implementation is used, i.e. the famous example). In the end we do not change any attribute types, and assigning new values was always possible. And a template designer can look at how the type was defined, see what the outputs were, and map similar ones from the new implementation.

So all is fine as it was defined. I take back all my change proposals.

 

Finally, again in the name of symmetry, is it possible today to use default: {get_property: [SELF, some_property]} as a default value when defining inputs in the node type?

If not, then we should also allow it.

 

This is already supported, although not using "default". An input definition is actually a "parameter_definition" rather than a "property_definition". Parameter definitions are almost exactly like property definitions, with the following exception: in a parameter definition, the type is optional. When no type is specified, the parameter definition defines the fixed value to be assigned to the parameter (which in most cases will be an intrinsic function, as in your example).
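
A short sketch of that behavior, with illustrative names; the type-less short form shown is an assumption about how the fixed-value assignment is written:

interfaces:
  Standard:
    configure:
      inputs:
        retries:                        # typed definition: value assigned later
          type: integer
        endpoint: { get_property: [ SELF, endpoint ] }   # no type: fixed value via intrinsic function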

 

I am not talking about operation inputs here, but about plain attribute default-value setting in the type, where we would like to refer to another attribute of SELF, or of SOURCE or TARGET (in a relationship). The relationship case is more interesting.

 

a. You include an example of a "callback" that provides a node and a callback notification as inputs to an operation. Wouldn't it be cleaner to support this use case by defining a new "asynchronous" lifecycle management interface where a separate "on_success" notification is defined for each operation? Using the asynchronous interface, operations would return immediately (indicating that the operation has been started), and then the "on_success" for that operation would be called as soon as the operation completes. The "callback" would then be implemented using the "policy trigger" support that is already part of the language.

 

I don't think it would be good to connect the on_success with asynchronous behavior (i.e. when it returns). As it is now, in the synchronous case, the operation still returns, even if it has failed. Also, it can fail even if it returns asynchronously; we just learn about the failure when it returns.

 

I'm not sure I understand your statement about "connecting on_success with asynchronous behavior". Using "on_***" as a name implies asynchronous behavior. Could you clarify?

 

I don't see it as asynchronous. In today's workflows we wait (synchronously) after a call_operation. If it does not return in time or returns with an error, then the on_failure path is taken. At least this is how I understand it.

 

Now, we could use a non-mandatory keyname "asynchronous" in the operation definition that we set to yes (default is no) when defining an asynchronous operation. That would imply that the operation returns immediately, and that a notification of the same name is created and will be called asynchronously from the outside (i.e. the return will behave like a notification). The outputs will be mapped after the associated notification returns (see the sketch after these bullets):

o   Advantage: we don't need to give an output notification, and we don't need to specify the same artifact twice (if implementation artifacts are used for notifications; see point a. above).

o   Disadvantage: there will be a "hidden" notification defined in the operations section that is not "that visible".

All in all, I think the advantages outweigh the disadvantages. But I would prefer the asynchronous keyword instead of the on_success. Then we can define synchronous operations, asynchronous operations, and notifications in the same interface.
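
A sketch of the proposal, for concreteness; "asynchronous" is a hypothetical keyname that does not exist in the current grammar, and the other names are illustrative:

interfaces:
  Upgrade:
    upgrade:
      asynchronous: yes                 # proposed, non-mandatory keyname; default is no
      implementation: scripts/start_upgrade.sh   # operation returns immediately
      outputs:
        result: [ SELF, upgrade_status ]   # mapped only after the implicit notification returns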

 

I like this approach, although technically the "asynchronous" keyname should be associated with operation implementations rather than with operation definitions.

 

I believe there is real value in letting the workflow continue after an asynchronous operation is invoked (not get stuck until the response comes); another workflow could then be triggered when the operation returns.

 

These are two different ways of writing the workflows, and we should be able to tell which applies when we look at the operation definition in the node type, regardless of whether somebody else changes the implementation in the node template.

 

Here I am aiming more long term, when declarative workflows can be written and reused in different templates.

 

b. I notice you're reading the "policy trigger" grammar in the spec differently than I did: in your examples triggers are named. I didn't think the spec included trigger names. Of course, since there are no policy examples in the spec and the grammar itself is ambiguous, it is hard to know which is correct. I'm currently working on a project that requires TOSCA policies, and I must admit I find the whole policy section confusing. I would love a separate discussion on policies after we talk about notifications.

 

Named triggers are defined in the trigger definition (see sections 3.6.20.3.1 / 3.6.20.3.2). Actually, I have a small error in my policy definition, since the triggers are not a list but a map.
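
For reference, a sketch of the map form as discussed; all names are illustrative, and the trigger name (cpu_high) is required only because of the map syntax:

policies:
  - scaling_policy:
      type: tosca.policies.Scaling      # illustrative policy type
      triggers:
        cpu_high:                       # trigger name, forced by the map form
          event_type: AttributeChanged  # illustrative event
          action:
            - call_operation: Scaling.scale_out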

 

I guess it's actually the policy definition (Section 3.8.6) that defines triggers as a "map", which implies trigger names. I'd like to revisit this at some point, for two main reasons:

 

  • The trigger name is not used anywhere. I'd like to avoid forcing users to name things when the name doesn't matter.

 

Sure. I agree.

I guess it was defined as a map because there was no "sequentiality" (associated with lists). But a list does not have to imply sequentiality.

 

  • I'm confused about why policies should include multiple triggers in the first place. I can see how multiple events can trigger the same policy, but a trigger is not an event. Instead, a trigger is an "event/condition/action" tuple, which is actually the definition of what an (imperative) policy is. In my opinion, the concept of a "trigger" in the TOSCA policy grammar is extremely confusing. There clearly is a need to define policies, and policies should define events, conditions, and actions. I don't see how the concept of a trigger fits in, let alone multiple triggers in the same policy.

 

I guess it's just for aggregation: you "apply" a policy or not, i.e. all triggers or none.

 

c. On a related note, there are a lot of similarities between "policy triggers" and "workflow steps", especially in the area of defining (pre)conditions. At the same time, there are enough syntax differences to make it difficult to keep the concepts straight. I think there is an opportunity to clean things up here.

 

How I understand it: the workflow steps are evaluated as specified in the workflow, and that is when the preconditions in the workflow step are evaluated. If they hold, the actions are executed; if not, they are skipped. What I don't understand is whether the on_success is triggered in the skipping case. I guess not. Which means that no subsequent steps are evaluated (neither on_success nor on_failure).

 

Yes, this is a big problem. It means that workflows can "dead-end". I'm sure that was not the intent. More likely, what was intended was the following: wait until the conditions become true, and then continue with the workflow step. However, this type of behavior can really only be implemented cleanly using asynchronous behavior, which means it should be connected to our new notification functionality. That's why I think there is an opportunity to harmonize workflow steps and policies.

 

I think it was supposed to dead-end (i.e. act as an assert). Anyway, if we want to wait, we should define an explicit wait action that can wait until an event arrives or a condition is fulfilled.

 

Thanks again for taking the initiative to write this contribution. Let's discuss next week so we can make this an important part of the 1.3 specification.

 

Thanks for the insightful comments.

 

Chris

 


