Subject: Re: [tosca] TOSCA artifact processing


Hi Luca,

Thanks for your review and comments. Let me go through what you sent one point at a time and provide my thoughts.

1) The initial part of the document, in my humble opinion, should end up in an appendix since, placed where it is, it anticipates too much of a specification the reader has not yet seen; it introduces things that are not normative and risks confusing users.

We had discussed breaking apart chapter 2 (i.e., the "by example" chapter) from the rest of the document at some point, but agreed to keep it together for the convenience of end users and editors, so they do not have to chase down or track multiple documents. However, many "modern" specs (e.g., those produced by Google and others) often just specify "by example" and go no further into formalization; it was a concerted decision to reach this audience for this version (Simple Profile).

At this point, decoupling is not desirable and would produce significant overhead both for editors and for the process; I would prefer keeping it all together.

We COULD add a preamble stating the "layout" of the document and the approach taken.
 
2) The standard never states clearly where the scripts have to be executed; there are a couple of sentences from which the reader can "infer" that they should be executed IN the node, but that is too generic.  Also, the document speaks about tools like Puppet or Chef, but no examples are provided; in this case I suggest that we either add some examples or remove the reference, since it keeps you waiting for more and then you get disappointed (ref 2746).

We have discussed this, and it is a top priority for addressing artifact processing.  In fact, we discussed language for this that Chris is starting on, but we admit there are many cases where the basic rules will be challenged by additional use cases (for example, where we do not have Compute hosts).
 
3) One of the things I never understood in this standard is how a standard that allows defining custom nodes can expect to be interoperable unless it states precise relationships or a starting point. There is no starting point in the spec, nor is it explained that the orchestrator needs to create a graph using the relationships, pick the root nodes (there could be more than one), and start working on those as the first things to create.  Also, since we know the domain, those root nodes should be of some enumerated, known and normative typology (Compute or Container or network or storage); if we end up having a DB as a root node, the question is where do we place it? Is it allowed, or should the orchestrator fire an error like "wrong root node type"?

Custom nodes are not interoperable.  Interoperability is achieved by having normative (Node) Types that represent well-known/well-understood (software or hardware) functions (services).  If a node does not have a relationship to Compute, then it is treated as a standalone service that the provider needs to start and provide access to in its Cloud platform (the Compute/hosting implementation for the service is left up to them).  For example, a DBMS could be a hosted service the Cloud provider starts and provides access to on behalf of the user (e.g., DB2, MySQL, CouchDB).
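
To make that concrete, here is a minimal sketch (the node and type names below are hypothetical, not normative) of a template where a DBMS has no HostedOn requirement toward a Compute node, so the hosting is left to the provider:

node_templates:
  # Hypothetical: no "host" requirement pointing at a Compute node, so the
  # provider is expected to stand this up as a managed/hosted service.
  shared_dbms:
    type: acme.nodes.HostedDBMS      # non-normative, provider-specific type
    properties:
      port: 3306
      root_password: { get_input: dbms_root_password }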

Also, orchestrator implementations are free to provide their own logic for handling "unknown" types (where artifacts are not provided for deployment or installation) and for error processing; we have decided not to dictate these things in describing TOSCA, as that would make it more "rigid": we would have to provide a protocol/framework for transport (APIs, HTTP, etc.), and we really did not want to go there.
 
4) In the spec, do we define errors that the orchestrator needs to fire?

No, as described above; this is as intended.
 
5) The execution of the artifact could be too dependent on the way the orchestrator is implemented, and thus not interoperable: in my opinion the spec should just require all the proper information to be written in the TOSCA file as DATA; a script is an imperative piece of the topology and contrasts with the otherwise declarative approach.

It is our intent to recognize well-known Artifact Types and describe their (interoperable) processing (including VMs and Containers).  This provides interoperability.  If we as a WG/TC want to add more normative artifact types, we can describe their "processors" as well.

In fact, we discussed at our last WG call the formalization of Artifact Types (using well-known or documented MIME types or file extensions) and their relationships to processors.  The puppetmaster/chef server example is a good use case that we also discussed, where the pattern would be "provide access to some external master or controller": the orchestrator would provide the proper runtimes/tooling based upon the artifact (script) Type and provide access to an existing server (master) that the clients would connect to.   A separate (single) service template/model could be used to establish the base server/master node the others (clients) would connect to.
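
As a rough sketch of what that formalization might look like (the Puppet type below is hypothetical; mime_type and file_ext are the existing artifact type keynames the "processor" mapping would key on):

artifact_types:
  tosca.artifacts.Implementation.Bash:       # script artifact for Bash
    derived_from: tosca.artifacts.Implementation
    mime_type: application/x-sh
    file_ext: [ sh ]
  acme.artifacts.Implementation.Puppet:      # hypothetical manifest artifact
    derived_from: tosca.artifacts.Implementation
    mime_type: text/x-puppet                 # illustrative only
    file_ext: [ pp ]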
 
BTW: in 1.50.3.1 I read "TOSCA implementation currently does not allow concurrent executions of scripts implementation artifacts (shell, python, ansible, puppet, chef etc.) on a given host." Are we sure that all implementations work this way? Or do we want to state that the implementation MUST NOT allow concurrent execution? Can this be made clearer?

It seemed clear when we wrote it and still seems clear.  That is, implementation artifacts (scripts) MUST be run sequentially, as listed in the model (nodes), to guarantee an outcome (and to avoid problems with scripts/tools that are not process- or thread-safe).  We can revisit the wording if we want to say the reverse (or more).
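
For example, in a template like the following (names made up), the Standard lifecycle scripts for the node would be run one after another, never concurrently:

node_templates:
  my_app:
    type: tosca.nodes.SoftwareComponent
    requirements:
      - host: my_server
    interfaces:
      Standard:
        create: scripts/install.sh        # runs first
        configure: scripts/configure.sh   # runs only after create completes
        start: scripts/start.sh           # runs only after configure completes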

Kind regards,
Matt

STSM, Master Inventor, IBM Open Cloud Technologies and Standards
Team Lead, OpenWhisk Open Eco-system
Chair, Lead Editor OASIS TOSCA Simple Profile WG,
Co-Chair OASIS TOSCA Interop. SC,
Founder of DMTF Cloud Audit (CADF) Standard
mrutkows@us.ibm.com,  mobile: 512-431-5002




From:        Luca Gioppo <luca.gioppo@csi.it>
To:        Chris Lauwers <lauwers@ubicity.com>, "tosca@lists.oasis-open.org" <tosca@lists.oasis-open.org>
Date:        11/15/2016 03:03 AM
Subject:        Re: [tosca] TOSCA artifact processing
Sent by:        <tosca@lists.oasis-open.org>





My comments on some of the points:
1) The initial part of the document, in my humble opinion, should end up in an appendix since, placed where it is, it anticipates too much of a specification the reader has not yet seen; it introduces things that are not normative and risks confusing users.
 
2) The standard never states clearly where the scripts have to be executed; there are a couple of sentences from which the reader can "infer" that they should be executed IN the node, but that is too generic.  Also, the document speaks about tools like Puppet or Chef, but no examples are provided; in this case I suggest that we either add some examples or remove the reference, since it keeps you waiting for more and then you get disappointed (ref 2746).
 
3) One of the things I never understood in this standard is how a standard that allows defining custom nodes can expect to be interoperable unless it states precise relationships or a starting point. There is no starting point in the spec, nor is it explained that the orchestrator needs to create a graph using the relationships, pick the root nodes (there could be more than one), and start working on those as the first things to create.  Also, since we know the domain, those root nodes should be of some enumerated, known and normative typology (Compute or Container or network or storage); if we end up having a DB as a root node, the question is where do we place it? Is it allowed, or should the orchestrator fire an error like "wrong root node type"?
 
4) In the spec, do we define errors that the orchestrator needs to fire?
 
5) The execution of the artifact could be too dependent on the way the orchestrator is implemented, and thus not interoperable: in my opinion the spec should just require all the proper information to be written in the TOSCA file as DATA; a script is an imperative piece of the topology and contrasts with the otherwise declarative approach.
This is why in my implementation I used the template concept: in the nodes I have properties, and each node has templates into which the properties get substituted; this allows the orchestrator to change the template and work differently. So if the template ends up as a bash script, the orchestrator knows it will have to execute it on the node, but if it is a Puppet manifest, it knows it will have to write something in the puppetmaster and then the VM will present itself to it and do the installation.
In this last case I transform the whole TOSCA file into a Puppet manifest, apart from the root node, which I create outside of Puppet by talking directly to the cloud environment.
In my case I do many operations in the "orchestrator environment" (I create new Docker images, prepare Puppet artifacts), then just fire up the known normative root elements, which get a "well-known" operation set.
In my case the hosted-on relationship is not treated uniformly (I should really specialize it one of these days), since being hosted-on a root node is different than being hosted-on another generic node.
For me an artifact is a SQL file that represents the dump of a DB, or the reference to an ISO image I have to use, NOT a bash script that does part of the orchestrator's job: in my opinion that approach is wrong, since it breaks the declarative approach we state we want to use in TOSCA.  If you have a bash script, you know you are giving away a CSAR that is not interoperable by definition (the chance that the script will work across Linux distributions is low, and it will only work on Linux for sure); you are also interfering with the orchestrator by assuming that it is able to execute scripts in the way you expect (where, how, when, etc.), and as Chris pointed out, all this detail is missing.
 
 

BTW:

In 1.50.3.1 I read "TOSCA implementation currently does not allow concurrent executions of scripts implementation artifacts (shell, python, ansible, puppet, chef etc.) on a given host." Are we sure that all implementations work this way? Or do we want to state that the implementation MUST NOT allow concurrent execution? Can this be made clearer?

 

Sorry for the long post.

Luca

On 14 November 2016 at 21:41, Chris Lauwers <lauwers@ubicity.com> wrote:
I took an action item during last week’s Simple Profile meeting to capture our discussions in the latest version of the Simple Profile spec. However, I’m struggling a bit with how to start, since there doesn’t seem to be a single section in the document where the deployment process is described.
 
-          In the early (introductory) sections of the document, we show service template examples that use normative node types and state that "orchestrators are expected to know how to deploy all normative node types".  I understand that this decision was made to keep the profile "Simple", but I believe that may limit what can be deployed using TOSCA to only very simple service topologies.
-          The document then presents examples that introduce scripts, but it suggests that those scripts are used to "extend" the behavior of the Normative Types in areas where some customization is needed. However, I can't find a place in the document that definitively states where these scripts are to be executed. I think the descriptions in the document imply that these scripts are supposed to be executed on the Compute node that is the target of the HostedOn relationship of the node, but if that's the case we should state that explicitly.
-          The document should also prescribe what to do in those cases where the "run the script on the HostedOn target" approach doesn't work:
o   Some nodes have a HostedOn relationship that doesn't point to a Compute node. For example, DB nodes are HostedOn DBMS systems. Those DBMS systems in turn are HostedOn a Compute node. Should we modify the "rule" to say that an orchestrator needs to follow the HostedOn chain until it hits a Compute node? (A sketch of this chain follows this list.)
o   Some nodes may not have a HostedOn requirement. Luc suggested that for those nodes, the rule should be that scripts need to be run in the context of the Orchestrator rather than in the context of a Compute node. Is this an acceptable extension of the rule?
o   Some nodes may have a HostedOn relationship that doesn’t ever terminate in a Compute node. For example, the Docker use cases in the spec don’t use Compute nodes. If there is a need to run additional configuration scripts, it doesn’t seem like there is a way to do this in a portable fashion.
o   Some nodes may have a HostedOn relationship, but scripts/artifacts associated with that node should not be run on the HostedOn target (e.g. they may have to be processed by a puppet master instead).
-          There are also inconsistencies between implementation and deployment artifacts:
o   Most of the examples in the text use implementation artifacts and deployment artifacts interchangeably, but the Interface and Operation specifications only talk about implementation artifacts. There isn’t any mention of deployment artifacts.
o   Processing of deployment artifacts gets two paragraphs (in section 5.8.4.3) but that section doesn’t really prescribe a definitive way of deploying such artifacts.
o   From the prose, it's not clear if the "dependencies" section in Operation specifications applies only to implementation artifacts or also to deployment artifacts.
-          Derek mentioned last week that in order to process some types of artifacts, the orchestrator may need to deploy an entire subtopology that acts as the “processor” responsible for the deployment. Should it be possible to specify this subtopology in a service template?
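
For reference, a minimal sketch of the DB -> DBMS -> Compute chain mentioned above (all names are made up; whether create_db.sh runs on my_server by "following the chain", and how the db_content deployment artifact gets processed, are exactly the open questions):

node_templates:
  my_db:
    type: tosca.nodes.Database
    requirements:
      - host: my_dbms
    artifacts:
      db_content:                          # deployment artifact (SQL dump)
        file: files/initial_data.sql
        type: tosca.artifacts.File
    interfaces:
      Standard:
        create: scripts/create_db.sh       # implementation artifact (script)
  my_dbms:
    type: tosca.nodes.DBMS
    requirements:
      - host: my_server
  my_server:
    type: tosca.nodes.Compute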
 
In any event, apologies for not getting any prose written but I’m hoping that the points above can help guide the discussion tomorrow.
 
Thanks,
 
Chris
 



