

Subject: Re: [tosca] discussion topic for Tuesday's Simple Profile call


Hi Chris,

 

I absolutely think we need a way to specify, at the operation level, whether an operation should be executed on the node’s host (which should be the default) or on the management machine (the default when the node has no Compute host?). And I fully agree that there are many scenarios where this is required.
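For concreteness, here is a sketch of what such an operation-level switch might look like. The `executor` keyname below is purely hypothetical, invented for illustration; it is not part of the current Simple Profile:

```yaml
# Hypothetical sketch only: the "executor" keyname does not exist in the
# current TOSCA Simple Profile; it illustrates marking, per operation,
# where the implementation artifact runs.
node_templates:
  my_service:
    type: tosca.nodes.SoftwareComponent
    interfaces:
      Standard:
        create:
          implementation: scripts/create.sh
          executor: host          # run on the node's Compute host (default)
        configure:
          implementation: scripts/register.py
          executor: orchestrator  # run on the management machine
```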

From a security perspective, I guess it could be the orchestrator’s responsibility to sandbox the executed code, through containers or some other means.

From a workflow generation point of view, I absolutely think there will be a mix of orchestrator-operated nodes and Compute-operated nodes. But this is no harder to handle than the synchronization between two different Compute nodes that the orchestrator already has to manage, so I don’t think it is an issue.

 

Regarding single-operation nodes, my main concern is that in the current TOSCA model, nodes that are not implemented are considered abstract and must be replaced by the orchestrator with a concrete implementation. In such topologies (implemented by an artifact) this consideration goes away, and that should be specified somehow. We also have to make sure that we have a way to specify the state of every node, as some images may contain nodes that are installed but not started.
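To illustrate the concern, a topology implemented by a single VM image might need per-node state annotations along these lines. The topology-level `artifacts` section and the `initial_state` keyname are both hypothetical, invented here for illustration (`tosca.artifacts.Deployment.Image.VM` is the existing normative artifact type):

```yaml
topology_template:
  artifacts:                       # hypothetical: artifacts at topology level
    appliance_image:
      type: tosca.artifacts.Deployment.Image.VM
      file: images/appliance.qcow2
  node_templates:
    database:
      type: tosca.nodes.Database
      initial_state: started       # hypothetical keyname: image starts this node
    web_app:
      type: tosca.nodes.WebApplication
      initial_state: created       # installed by the image, but not started
```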

Finally, it is important to note that the usual TOSCA workflow weaving does not apply to these special topology-included nodes, and connecting them with relationships that define pre_configure / post_configure operations should be considered an error (unless the node is left in the created state?).

 

I just want to make sure that all the workflow constraints of using such nodes, and their interactions with other nodes through relationships, are well specified (limitations in supported operations, node states, etc.).

I also think we need a way for people to specify or override the internal nodes that have no operations specified, replacing them with nodes that define management operations (like start for a node that is installed by the VM image but not started).

 

I hope I will be able to join the call today, but I have a sore throat that makes it hard to speak, so I’m not sure I will make it.

 

Luc

 

From: <tosca@lists.oasis-open.org> on behalf of Chris Lauwers <lauwers@ubicity.com>
Date: Tuesday, 8 November 2016 at 05:13
To: "tosca@lists.oasis-open.org" <tosca@lists.oasis-open.org>
Subject: [tosca] discussion topic for Tuesday's Simple Profile call

 

During our last Simple Profile call 2 weeks ago, we discussed an idea I proposed: allowing artifacts to be associated with entire topologies (whereas previously we limited artifacts to individual nodes). The use case that motivated this idea is the scenario where a VM or Container image contains an entire “virtual appliance” that might consist of a number of software components bundled together with the Operating System on which these software components run. Associating this VM artifact with an entire topology accomplishes two things:

 

-          It correctly models the scenario where the entire topology gets deployed in one single action (just by deploying the VM)

-          In addition, it provides a model of the topology inside the VM

 

I had an action item to flesh out this approach by adding prose to the latest draft of the Simple Profile doc.

 

However, during our discussion 2 weeks ago Luc observed that this approach can’t really be supported by TOSCA today since the only way to deploy things is by running scripts on Compute nodes that “host” software components. Luc is correct, of course, and the use case I’m proposing doesn’t really fit this simple model.

 

Based on Luc’s comment, I suggest that we take a little detour and discuss whether it might be time to extend TOSCA’s deployment model a bit. The “run a script on a Compute node” approach works fine for cloud-based software applications, but not for much else. For example, in ETSI NFV, the assumption is that things will get deployed by making API calls to a Virtual Infrastructure Manager (VIM). VIMs could represent cloud management systems such as OpenStack, but also SDN controllers such as OpenDaylight. If we want TOSCA to be adopted as a “universal” modeling language for all types of entities that need to be deployed dynamically, then we need to support the scenarios where things get deployed by mechanisms other than running scripts on Compute nodes. In fact, the approach of making API calls will likely be a more common scenario than running scripts. The software that makes these API calls will run on (or with) the orchestrator, not on the entities that are being deployed.
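To make the “API calls instead of scripts” idea concrete, here is a minimal sketch of an orchestrator-side operation implementation. Everything in it is hypothetical: the `VimClient` class stands in for a real VIM API client (e.g. the OpenStack Compute API), and the names `create_server`, `orchestrator_create`, etc. are invented for illustration:

```python
# Hypothetical sketch: a node's "create" operation executed by the
# orchestrator itself, deploying via a VIM API call rather than by
# running a script on a Compute host. VimClient is a stand-in for a
# real client (e.g. OpenStack); all names here are illustrative.

class VimClient:
    """In-memory stand-in for a VIM (Virtual Infrastructure Manager) API."""
    def __init__(self):
        self._servers = {}

    def create_server(self, name, image, flavor):
        # A real client would POST to the VIM's server-creation endpoint.
        server_id = f"srv-{len(self._servers) + 1}"
        self._servers[server_id] = {"name": name, "image": image,
                                    "flavor": flavor, "status": "ACTIVE"}
        return server_id

    def get_status(self, server_id):
        return self._servers[server_id]["status"]


def orchestrator_create(vim, node_name, artifact):
    """Run the node's 'create' operation on the orchestrator side:
    nothing is copied to a Compute host; the VIM is called instead."""
    return vim.create_server(node_name, image=artifact, flavor="small")


vim = VimClient()
server_id = orchestrator_create(vim, "appliance", "appliance.qcow2")
print(server_id, vim.get_status(server_id))  # srv-1 ACTIVE
```

The point of the sketch is simply that the code implementing the operation runs with the orchestrator, which is exactly what raises the sandboxing and security questions below.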

 

I’d like to use the meeting on Tuesday to discuss these scenarios:

 

-          How do we specify in TOSCA that “operations” are to be executed by the orchestrator (rather than by Compute nodes)?

-          If the orchestrator runs software that implements these operations, how do we protect the orchestrator from faulty (or malicious) code?

-          Are there any other security concerns?

-          Can interfaces mix and match Compute-hosted scripts and Orchestrator-hosted scripts? If so, are there any synchronization issues between these two types?

 

I’m sure there are other aspects of this approach to discuss. If there are no objections, then I’d like to make this the topic of Tuesday’s discussion.

 

Thanks,

 

Chris

 


