Subject: Re: [tosca] TOSCA artifact processing


My comments on some of the points:
1) The initial part of the document should, in my humble opinion, be moved to an appendix. Where it stands now, it anticipates too much of a specification the reader does not yet know, it introduces things that are not normative, and it risks confusing users.
 
2) The standard never states clearly where the scripts have to be executed. There are a couple of sentences from which the reader can "infer" that they should be executed IN the node, but that is too generic. The document also talks about tools like Puppet or Chef, but no examples are provided; in this case I suggest that we either add some examples or remove the reference, since as written it leaves the reader waiting for more and then disappoints (ref 2746).
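For instance, the spec could show something like this (a minimal sketch; the node and script names are invented, while the types and the Standard interface are the normative ones) and then state explicitly where install.sh runs:

    node_templates:
      my_component:
        type: tosca.nodes.SoftwareComponent
        requirements:
          - host: my_server              # HostedOn a Compute node
        interfaces:
          Standard:
            create: scripts/install.sh   # hypothetical script; the spec should
                                         # say explicitly that it runs ON my_server
      my_server:
        type: tosca.nodes.Compute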
 
3) One of the things I never understood about this standard: how can a standard that allows users to define custom nodes, and that expects to be interoperable, work unless it states precise relationships or a starting point? There is no starting point in the spec, nor does it explain that the orchestrator needs to build a graph from the relationships, pick the root nodes (there could be more than one), and create those first. Also, since we know the domain, those root nodes should belong to a small, known, and normative set of types (Compute, Container, network, or storage). If we end up having a DB as a root node, the question is: where do we place it? Is it allowed, or should the orchestrator raise an error like "wrong root node type"?
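To make this concrete, here is a minimal sketch (invented node names, normative types) of what I would expect: the orchestrator follows the HostedOn requirements downward, and the node with no host requirement is the root it must create first:

    topology_template:
      node_templates:
        web_app:
          type: tosca.nodes.WebApplication
          requirements:
            - host: web_server
        web_server:
          type: tosca.nodes.WebServer
          requirements:
            - host: server
        server:
          type: tosca.nodes.Compute   # no host requirement: this is the root,
                                      # and it is a known normative type

Nothing in the spec says that this is what an orchestrator must do, and that is exactly the gap.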
 
4) Does the spec define the errors that the orchestrator needs to raise?
 
5) The execution of an artifact can depend too much on the way the orchestrator is implemented, and is therefore not interoperable. In my opinion the spec should simply require all the necessary information to be written in the TOSCA file as DATA; a script is an imperative piece of the topology and conflicts with the otherwise declarative approach.
This is why in my implementation I used the template concept: nodes have properties, and each node has templates into which those properties are substituted. This allows the orchestrator to treat templates differently: if the template renders to a bash script, the orchestrator knows it has to execute it on the node; if it is a Puppet manifest, it knows it has to write something to the puppetmaster, and the VM will then present itself to the puppetmaster and perform the installation.
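Roughly, the idea looks like this (a hypothetical sketch of my implementation, not anything from the spec; all names are made up):

    node_templates:
      my_dbms:
        type: tosca.nodes.DBMS
        properties:
          port: 5432                     # pure data, declared in the TOSCA file
          root_password: changeme        # (example value)
        artifacts:
          install_template:
            type: tosca.artifacts.File   # or a custom "template" artifact type
            file: templates/dbms.pp.tpl  # hypothetical Puppet-manifest template;
                                         # the orchestrator substitutes the
                                         # properties above and decides where the
                                         # result goes (the node, or the puppetmaster)
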
In this last case I transform the whole TOSCA file into a Puppet manifest, apart from the root node, which I create outside of Puppet by talking directly to the cloud environment.
In my case I do many operations in the "orchestrator environment" (I create new Docker images, prepare the Puppet material) and then just fire up the known normative root elements, which get a "well-known" operation set.
In my case the hosted-on relationship is not treated uniformly (I should really specialize it one of these days), since being hosted-on a root node is different from being hosted-on another generic node.
For me an artifact is an SQL file that represents the dump of a DB, or a reference to an ISO image I have to use; NOT a bash script that does part of the orchestrator's job. In my opinion that approach is wrong, since it breaks the declarative approach we say we want in TOSCA. If you ship a bash script, you know you are giving away a CSAR that is not interoperable by definition (the chance that the script works across Linux distributions is low, and it certainly works only on Linux). You are also interfering with the orchestrator by assuming it is able to execute scripts in the way you expect (where, how, when, etc.), and as Chris pointed out, all of this detail is missing.
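In TOSCA terms the difference would look roughly like this (a sketch; the file names are made up, while the node and artifact types are the normative Simple Profile ones):

    node_templates:
      my_db:
        type: tosca.nodes.Database
        requirements:
          - host: my_dbms
        artifacts:
          db_content:
            type: tosca.artifacts.File
            file: files/initial_data.sql   # declarative: WHAT the node needs
        interfaces:
          Standard:
            create: scripts/create_db.sh   # imperative: HOW to do part of the
                                           # orchestrator's job; assumes bash,
                                           # Linux, and a particular execution
                                           # context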

 

 

BTW:

In section 1.50.3.1 I read: "TOSCA implementation currently does not allow concurrent executions of scripts implementation artifacts (shell, python, ansible, puppet, chef etc.) on a given host." Are we sure that all implementations work this way? Or do we want to state that an implementation MUST NOT allow concurrent execution? Can this be made clearer?

 

Sorry for the long post.

Luca

On 14 November 2016 at 21:41, Chris Lauwers <lauwers@ubicity.com> wrote:


I took an action item during last week’s Simple Profile meeting to capture our discussions in the latest version of the Simple Profile spec. However, I’m struggling a bit with how to start, since there doesn’t seem to be a single section in the document where the deployment process is described.

 

-          In the early (introductory) sections of the document, we show service template examples that use normative node types and state that “orchestrators are expected to know how to deploy all normative node types”.  I understand that this decision was made to keep the profile “Simple”, but I believe it may limit what can be deployed using TOSCA to only very simple service topologies.

-          The document then introduces examples that use scripts, but it suggests that those scripts are used to “extend” the behavior of the Normative Types in areas where some customization is needed. However, I can’t find a place in the document that definitively states where these scripts are to be executed. I think the descriptions in the document imply that these scripts are supposed to be executed on the Compute node that is the target of the HostedOn relationship of the node, but if that’s the case we should state it explicitly.

-          The document should also prescribe how to handle those cases where the “run the script on the HostedOn target” approach doesn’t work:

o   Some nodes have a HostedOn relationship that doesn’t point to a Compute node. For example, DB nodes are HostedOn DBMS systems, and those DBMS systems are in turn HostedOn a Compute node. Should we modify the “rule” to say that an orchestrator needs to follow the HostedOn chain until it hits a Compute node? (See the sketch after this list.)

o   Some nodes may not have a HostedOn requirement. Luc suggested that for those nodes, the rule should be that scripts are run in the context of the Orchestrator rather than in the context of a Compute node. Is this an acceptable extension of the rule?

o   Some nodes may have a HostedOn relationship that doesn’t ever terminate in a Compute node. For example, the Docker use cases in the spec don’t use Compute nodes. If there is a need to run additional configuration scripts, there doesn’t seem to be a way to do this in a portable fashion.

o   Some nodes may have a HostedOn relationship, but the scripts/artifacts associated with that node should not be run on the HostedOn target (e.g. they may have to be processed by a puppet master instead).

-          There are also inconsistencies between implementation and deployment artifacts:

o   Most of the examples in the text use implementation artifacts and deployment artifacts interchangeably, but the Interface and Operation specifications only talk about implementation artifacts. There isn’t any mention of deployment artifacts.

o   Processing of deployment artifacts gets two paragraphs (in section 5.8.4.3), but that section doesn’t really prescribe a definitive way of deploying such artifacts.

o   From the prose, it’s not clear whether the “dependencies” section in Operation specifications applies only to implementation artifacts or also to deployment artifacts.

-          Derek mentioned last week that in order to process some types of artifacts, the orchestrator may need to deploy an entire subtopology that acts as the “processor” responsible for the deployment. Should it be possible to specify this subtopology in a service template?
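To illustrate the first sub-bullet above with a sketch (the template names are hypothetical; the types are the normative Simple Profile ones): a script attached to the Database node would run neither on my_db nor on my_dbms, but on server, the Compute node at the end of the chain:

    node_templates:
      my_db:
        type: tosca.nodes.Database
        requirements:
          - host: my_dbms                     # HostedOn a DBMS, not a Compute
        interfaces:
          Standard:
            configure: scripts/load_schema.sh # under the proposed rule, this runs
                                              # on "server" below, the Compute node
                                              # at the end of the HostedOn chain
      my_dbms:
        type: tosca.nodes.DBMS
        requirements:
          - host: server                      # the chain terminates here
      server:
        type: tosca.nodes.Compute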

 

In any event, apologies for not getting any prose written, but I’m hoping that the points above can help guide the discussion tomorrow.

 

Thanks,

 

Chris

 
