[Date Prev] | [Thread Prev] | [Thread Next] | [Date Next] -- [Date Index] | [Thread Index] | [List Home]
Subject: Re: [tosca] RE: artifact processing
Hi Chris, Luca and others,

First of all, I wish you a happy new year and lots of success in your respective projects and in our common TOSCA work. :-)

Now, switching back to the technical discussion, I personally feel there are multiple subjects here that are not mutually exclusive. My feeling is that Luca is expressing a TOSCA modelling best practice: modellers should first model abstract components, with all their properties/attributes/capabilities/requirements etc., before thinking about a specific implementation. This is certainly the first level of portability of a template, as it gives people a way to implement a component differently if, for some reason, an orchestrator does not support the implementation they have built. However, I think it is a great value of TOSCA to care not only about the modelling, but also to try to make the work people will put into implementing the abstract components portable.
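As a small sketch of this layering in TOSCA Simple Profile YAML grammar (the type names and script path are illustrative, not from the discussion):

```yaml
node_types:
  # Abstract component: only the contract (properties, capabilities, ...)
  my.nodes.WebServer:
    derived_from: tosca.nodes.WebServer
    properties:
      port:
        type: integer
        default: 80

  # One concrete implementation among possibly many (shell, Ansible, ...)
  my.nodes.WebServer.Apache:
    derived_from: my.nodes.WebServer
    interfaces:
      Standard:
        create:
          implementation: scripts/install_apache.sh
```

An orchestrator that cannot run the shell-based implementation could still substitute another type derived from the same abstract component.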
There may, by the way, be multiple implementations of a given component with shell, Python, or extended artifacts that we do not yet officially specify (Puppet/Ansible/Chef etc.). Where I agree with Luca is that, while I think we should detail where and how a given artifact should be executed, and how its inputs are provided and its outputs fetched after the execution completes, we should not impose on implementers how they build their orchestrators, how they connect to machines, and so on, as there are many concerns there (I expressed them in a mail last year, though maybe only to some people from the YAML ad hoc group). That said, there is something that could, I think, allow people to express Chris's idea of artifact executors. Since we support shell scripts and Python as official artifacts, people could write a wrapper in shell or Python to call another type of artifact (you can invoke such a wrapper through your own way of calling Python or shell artifacts). We could then think of a way to specify these artifact processor extensions, which could be given to an orchestrator so that, if it does not handle a given artifact out of the box with all its own features (security, agents, or whatever), it could call the processor as a usual TOSCA artifact, with maybe one or two additional parameters. The last discussion point was the artifact 'execution target', which in my opinion is not related to the type of artifact and should rely on the host/no-host elements in TOSCA.
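One place where this distinction already surfaces in the TOSCA Simple Profile YAML grammar is the `operation_host` keyname of declarative workflow steps; a hypothetical sketch (the template name, node name, and workflow are made up for illustration):

```yaml
topology_template:
  workflows:
    deploy:
      steps:
        create_vm:
          target: amazon_vm               # hypothetical Compute node template
          operation_host: ORCHESTRATOR    # run on the orchestrator / tool master
          activities:
            - call_operation: Standard.create
        install_app:
          target: amazon_vm
          operation_host: SELF            # run on the provisioned VM itself
          activities:
            - call_operation: Standard.configure
```

This keeps "where the artifact runs" in the model without dictating how the orchestrator reaches the machine.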
Basically, an Ansible artifact can be executed on the Ansible master (for example when calling APIs to start an Amazon VM) or target a host (for example the started Amazon VM) to install something on it. If I summarize my thinking, what we should do is the following:
- We should perhaps document Luca's best practice somewhere and encourage people to write abstract components before implementing them in an extended type.
- We should work on identifying the additional artifact types (Ansible, Puppet, Chef, others?) we want to support as implementation artifacts for TOSCA, and specify how to:
  o Provide inputs
  o Get outputs
  o Provide an execution target, if one is required by the "official artifact processor" (Ansible runtime, Chef, Puppet), where applicable (host and connection parameters).
- We should work on elaborating a 'wrapper executor' syntax, so people can extend artifact execution through already supported artifacts.
  o These wrappers won't be mandatory: if an orchestrator supports an artifact out of the box, it has no need for a wrapper executor. At some point we may have, for non-official artifacts, some orchestrators that support them and some that don't (as long as there is no 'wrapper executor'). As long as they are declared as 'extension artifacts', I think this is fine, and people who want to use them will know exactly which limitations they may encounter.

Luc

From: <tosca@lists.oasis-open.org> on behalf of Luca Gioppo <luca.gioppo@csi.it>

Hi,
I'm rethinking the artifact processing topic and I want to propose an alternative point of view. What if we are looking at the problem with the wrong approach? The problem of "where to process the artifact" tries to solve HOW the orchestrator has to work, but that is an "imperative" problem. The real trouble I also had was asking myself: "I have this script in the TOSCA archive; how do I instruct the orchestrator where it has to execute it?" The question was wrong, because I should not have a script in the TOSCA archive at all, since that is again IMPERATIVE. We cannot mix declarative and imperative. The real problem is that we have an oversimplified set of properties for many nodes, and if we look at the "code in the script" we find many things that should be properties of
nodes. This is obviously due to the fact that we needed a simple example to work with, but to make a simple example work we had to hardcode the missing information somewhere, and it ended up in the shell script.
Probably the better solution would be to place all the needed information in the proper node (with proper relationships); the orchestrator (which could be implemented in any way) will then use that information to realize the topology. In my case I do not implement much in the orchestrator, but use an existing DevOps tool like Puppet, and all the properties for the node go into a template associated with the node. I do not have any script in the TOSCA archive, and I leverage existing tools (which I do not have to code myself, and which have a wide range of modules available for many things). The properties I use in the nodes are very detailed (like the reverse proxy rule of Apache's httpd.conf, for example), but this allows the orchestrator to use the information and do what it likes with it, and it could potentially be much more interoperable than a shell script.
In my case I use Puppet, and I can either add the various manifests that the orchestrator dynamically generates to a Puppet master, so that the newly created machine gets the catalog and applies it, or I can copy the files over SSH and use `puppet apply` on the VM. That is the imperative work related to how I implemented the orchestrator, and it does not concern the TOSCA archive designer.
I believe that if we look at how those various DevOps tools represent, for example, Apache, we could come up with a better example of an Apache node that has all the right properties: properties that are interoperable and that represent the set of information of a real production installation. The orchestrator could also work by invoking shell scripts, but it would compile the final script from a template using the information provided in the TOSCA file. I believe this should be the philosophy of TOSCA.
Luca
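A minimal sketch of the property-rich node Luca describes, assuming a hypothetical detailed node type and illustrative property names (none of these are defined in the TOSCA specification):

```yaml
topology_template:
  node_templates:
    apache:
      type: my.nodes.ApacheWebServer      # hypothetical, property-rich type
      properties:
        port: 80
        server_admin: admin@example.org
        reverse_proxy_rules:              # declarative httpd.conf data
          - location: /app
            backend: http://10.0.0.5:8080/
```

An orchestrator could render these properties into a Puppet manifest, an Ansible playbook, or a generated shell script, with no script stored in the archive itself.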