

Subject: RE: [tosca] Operation implementations


Hi Adam,

 

I'm not sure I completely follow your discussion below. Would you mind taking us through this in more detail during Tuesday's meeting?

 

Thanks,

 

Chris

 

 

From: adam souzis <adam@souzis.com>
Sent: Tuesday, October 5, 2021 6:14 AM
To: Tal Liron <tliron@redhat.com>
Cc: Chris Lauwers <lauwers@ubicity.com>; Calin Curescu <calin.curescu@ericsson.com>; tosca@lists.oasis-open.org
Subject: Re: [tosca] Operation implementations

 

I think it would be conceptually simpler to introduce a new type, such as an implementation or operation type, rather than overloading the meaning of data types or artifact types. TOSCA already has a philosophy of having many fundamental types, so defining a new one seems consistent with the rest of the spec.

 

Also, I thought I'd share an example of when I found it awkward not to be able to define a reusable operation or implementation type. This is my implementation for representing and deploying Helm charts and releases:

 

 

I need to call the helm command line for many different operations, and it is tedious and error-prone to have to copy and paste the shared logic for building up the arguments. So I defined an interface type called Helm and have a hack of an "artifact" called "Delegate" so I can do things like this:

 

        operations:
          check:
            implementation: Delegate
            inputs:
              operation: Helm.execute
              helmcmd: status
              dryrun: "--dry-run"

 

Workflows support call-operation, so I could have used that instead, but that has its own set of issues, mostly because workflows can't easily be specified as operation or interface implementations.
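For comparison, the call-operation alternative would look roughly like this (the workflow, step, and node template names are hypothetical; the grammar follows the TOSCA 1.3 call_operation activity):

        topology_template:
          workflows:
            check_release:
              steps:
                check_step:
                  target: my_release
                  activities:
                    - call_operation:
                        operation: Helm.check
                        inputs:
                          helmcmd: status

This keeps the shared Helm logic in one place, but the workflow has to be invoked explicitly instead of being triggered as the node's operation implementation.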

 

Another thing that might be of interest in this example: because artifacts can't yet be specified globally, I have another hack where a node template of type "unfurl.nodes.LocalRepository" indicates that its artifacts are available locally for the orchestrator to use as an implementation.

 

        helm-artifacts:
          type: unfurl.nodes.LocalRepository
          artifacts:
            helm:
              type: artifact.AsdfTool
              file: helm
              properties:
                version: 3.6.3

 

As a final extension, I allow artifact type definitions to have the regular 1.3 lifecycle interface and use that to install artifacts as needed. In this case, artifacts of type "artifact.AsdfTool" have an implementation that uses "asdf" to install the artifact, helm in this case. You can see the implementation here:

 

So when unfurl sees that the artifact it needs for an implementation is missing, it just adds it to the deployment plan like any other requirement.
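To make this concrete, here is a sketch of what such an artifact type definition might look like (attaching a lifecycle interface to an artifact type is my extension, not standard 1.3 grammar, and the script name is made up):

        artifact_types:
          artifact.AsdfTool:
            derived_from: tosca.artifacts.Root
            interfaces:
              Standard:
                type: tosca.interfaces.node.lifecycle.Standard
                operations:
                  create:
                    # hypothetical wrapper script that runs "asdf install <tool> <version>"
                    implementation: asdf-install.sh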

 

Talk soon,

Adam

 

On Mon, Oct 4, 2021 at 9:44 AM Tal Liron <tliron@redhat.com> wrote:

On Sun, Oct 3, 2021 at 4:59 PM Chris Lauwers <lauwers@ubicity.com> wrote:

I believe there are actually two separate issues here:

  1. Do operation implementations need to be "split up" into a "primary" implementation and a list of "dependent" implementations?
  2. Do operation implementations need to be typed, and if so, how do we specify the "type" of the operation implementation?

As for #1, as I noted, supporting "dependent" files is an implementation-type-dependent issue, not something that I think should be part of the grammar.

 

1.       In your example, you define ("declare") the type of the operation implementation in the interface type definition as follows:

interface_types:

  Maintenance:

    operations:

      maintenance-off:

        type: SSH # data type for the implementation. if not declared, must be set in assignment

        inputs:

          immediate:

            type: bool

I believe this is not good practice. An interface type definition should be just that: a definition of the set of operations that can be called within the context of that interface. Interface types should not have an opinion about how they should be implemented. Specifically, they should not restrict the type of their implementations.

 

I suggest having it as a type for two reasons:

 

1. To be used more as a default, so that you can have the type already there. It would be an optional keyword.

2. It is necessary to have this if you are generating interfaces from an existing system. This is the "second approach" I mentioned. For example, if you are generating an interface based on a gRPC proto file then, well, all operations are gRPC. It makes no sense to have them be SSH or something else.

 

3.       More importantly, your approach for handling the properties of the operation implementation data types is invalid TOSCA, in my opinion, and will not work. Here is your "Remote" data type definition:

data_types:

  Remote:

    properties:

      address:

        type: string

        default: { get_attribute: [ SELF, address ] }

      credentials:

        type: Credentials

        default: { get_input: credentials }

The "address" property has a default value that references SELF. However, data type definitions do not have a SELF context.

 

This is indeed controversial, and worth discussing separately from this whole topic. The advantage of allowing this usage is increasing reusability of various types.

 

And, I'll point out, for all attribute values the validation would have to be done at "use time" anyway. This includes notifications, which update values, after which the processor would have to validate constraints at runtime. So, all I am implying here is that this can be generalized for all values.

 

Also, by the way, the SOURCE and TARGET contexts might also not be known at design time. This would be especially true for what you call "dangling requirements".

 

Your gRPC example illustrates this use case. By the way, I believe your example shows grammar that is slightly different from what you intended. If my understanding is correct, the following snippet is grammatically more correct (i.e., the type is specified inside the "implementation" block, not the other way around, and the "properties" keyword is required):

topology_template:

  node_templates:

    server:

      type: Server

      interfaces:

        Maintenance:

          operations:

            maintenance-on:

              implementation:

                type: GRPC

                properties:

                  rpc: StartMaintenanceWithMode

                  timeout: 10 s

              inputs:

                mode: production

 

So, this is more of a syntactical quibble, but your example looks very verbose to me. It also assumes that it is not a data type, so there will always be a "properties" keyword. My suggested syntax actually works more like other parameters, in that we have separate "type" and "value" keywords. Also, my syntax supports having an implementation type that is just, for example, a plain string, in case you don't need the added complexity of a complex type.
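To illustrate, the two compact forms might look something like this (the "value" keyword reflects my reading of the proposal, not settled grammar, and the script path is made up):

            maintenance-on:
              implementation:
                type: GRPC
                value:
                  rpc: StartMaintenanceWithMode
                  timeout: 10 s

            maintenance-off:
              # plain string form, when no complex type is needed
              implementation: scripts/maintenance-off.sh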

Summary

As you pointed out, there are more issues that need to be resolved, but hopefully we can come to agreement on some of the basics. Here is my recommendation:

  • Introduce a new Implementation Type that must be used when defining operation implementations (instead of the current Artifact Types)

I'm not opposed to this, and indeed it was my initial proposal. I tried using data types instead just to see if it can work and simplify things for us.

 

A special implementation type might also solve the problem of SELF mentioned earlier: we can allow SELF to work within implementation types, in which case we define it as the node template in which the implementation is used.
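As a sketch, such a section might look like this (the "implementation_types" keyword and its contents are hypothetical, reusing the properties from the "Remote" data type above):

        implementation_types:
          SSH:
            properties:
              address:
                type: string
                # SELF here would resolve to the node template
                # in which the implementation is used
                default: { get_attribute: [ SELF, address ] }
              credentials:
                type: Credentials
                default: { get_input: credentials }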



