All potential intended variability of each type of service in the catalog must be easily expressible as a function of the inputs. Note that this creates two levels of "intent based", and I am not sure that was clear in the previous work on TOSCA for intent-based modeling:
1) The intent of the Day 0 designer to express in as few templates as possible the intended variability offered to the Day 1+ users
2) The intent of the Day 1+ users, expressed by a) selecting a template from the catalog and b) providing input values
I think there's no great need to say so, because this is already the case. :) But it won't hurt to make it clear.
If the expressive power of inputs is too low (or too hard to use), then the users will begin to spawn variants of the templates for those cases that are not expressible.
I hope we all agree that that is undesirable.
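To illustrate the level of expressive power inputs would need to carry -- the node type and property names below are illustrative, not from any real profile:

```yaml
tosca_definitions_version: tosca_simple_yaml_1_3

topology_template:
  # Day 0 intent: one template, with the allowed variability bounded by inputs
  inputs:
    flavor:
      type: string
      default: small
      constraints:
        - valid_values: [ small, medium, large ]
    replica_count:
      type: integer
      default: 1
      constraints:
        - in_range: [ 1, 10 ]

  node_templates:
    app_server:
      type: example.nodes.AppServer   # hypothetical node type
      properties:
        # Day 1+ intent: just the chosen input values
        flavor: { get_input: flavor }
        replicas: { get_input: replica_count }
```

If the variability users actually need falls outside what inputs like these can express, that's when the template-forking begins.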
Alternatively, we face another bad scenario: to express the intended variability, users invent their own home-grown formats to use as input to some TOSCA-generating scripts. That road, in my opinion, defeats the purpose of TOSCA as a standard.
I think this is inevitable, but I also don't think it's the end of the world.
There are two ways to approach variability:
1) Integrate variability into TOSCA. This, I think, is undesirable. Take a look at Helm charts -- what an unholy mess. Not only can they not be validated at design time, they are almost impossible to read, modify, or fix. They even allow for (code) plugins that do custom variability. This should be the poster child for the opposite of what TOSCA purports to be.
2) Add a preprocessor to the toolchain. You called it a "TOSCA-generating script" but it also can be some kind of template modification (not full generation). I'm not a fan of text templating for YAML, but otherwise it is possible to create some other higher-level templating that is purpose-built for YAML, and perhaps even purpose-built for TOSCA. (This is an idea for a project if someone wants to contribute to the ecosystem!)
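To sketch what such a purpose-built, structure-aware preprocessor could look like -- the `x-for-each` directive and its expansion semantics are entirely invented here, purely to illustrate the idea:

```yaml
# Hypothetical pre-TOSCA source: the preprocessor expands the directive
# at the YAML structure level (no text splicing), then hands plain,
# fully validatable TOSCA to the next tool in the chain.
node_templates:
  x-for-each:
    values: { site: [ east, west ] }
    template:
      name: db_$site
      type: tosca.nodes.DBMS
      properties:
        region: $site

# After expansion, the TOSCA processor sees only ordinary node templates:
#   db_east: { type: tosca.nodes.DBMS, properties: { region: east } }
#   db_west: { type: tosca.nodes.DBMS, properties: { region: west } }
```

Because the directive operates on YAML structure rather than text, the output is guaranteed to be well-formed YAML, unlike text-templated Helm charts.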
The idea of TOSCA being part of a toolchain is central to how I think about it. Because I deal with already-existing orchestrators such as Ansible and Terraform, or larger pyramids of orchestrators, I must think of TOSCA as being an input (albeit a processed input, and even a continuous input for Day 2 when I deal with attributes) into the next step in the process. I am not going to, and cannot, fork my orchestration universe in order to redesign it around a specific version of TOSCA.
And so I don't find it particularly offensive to have something before TOSCA in the toolchain. Another reason for this is that I have a lot of trust in TOSCA's validation capability. If the pre-processor creates a broken design, then the TOSCA processor will emit an error and we won't continue in the toolchain. (Yet another reason for my adamant resistance to allowing requirements to "dangle".)
Actually, I had some thoughts about enhancing the CSAR format to
specifically allow for pre-processors. Basically some kind of META
directive that instructs the toolchain to do something else first before
using the ".tosca" files therein. This would allow TOSCA inventory systems (such as used by ONAP) to continue using CSAR, at least, in a standard way. (A preprocessor for an entire CSAR would be quite painful.)
And by the way, preprocessing isn't just for variability of topologies, but also for profiles. I'm thinking of something simple, like specifying the profile version somewhere outside of TOSCA and then injecting it, maybe into TOSCA metadata. Or it could be a "last tested" timestamp, etc.
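For instance (the metadata key names and values here are illustrative), a preprocessor could take the profile version from outside TOSCA -- say, a CI variable -- and inject it so the TOSCA file simply carries:

```yaml
tosca_definitions_version: tosca_simple_yaml_1_3

metadata:
  profile_version: "2.0"        # injected by the preprocessor
  last_tested: "2024-01-15"     # illustrative timestamp, also injected
```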
So, again, my point is that variability is inevitable, and it's perhaps worthwhile to at least address it, if not support it, in our specs. (Puccini does talk about it in its FAQ.)