

Subject: RE: [tosca] tosca.nodes.Compute input data questions


Hi Luca,

 

I agree, an end-user should not have to worry about low-level details such as memory sizes or even scaling. An end-user should just be able to select a service from a catalog and select from a number of different options that specify end-user expectations for performance, reliability, etc. Ideally the system will then figure out the best topology template for satisfying the user’s requests.

 

In my opinion, this can be done in one of two ways:

 

1. Either the application designer (i.e. the architect who creates the topology templates) can create a number of pre-configured templates for different combinations of scalability, reliability, and performance, as well as combinations of supporting technologies (e.g. your MySQL vs. PostgreSQL example). The system will then select the appropriate template from the list based on user preferences. Note that this approach will result in a large number of templates (a minimal sketch of this follows below).

2. Alternatively, the orchestrator could construct the appropriate topology template “on-the-fly” in response to user inputs (based on policies, or by combining previously defined sub-topologies, etc.).
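
To illustrate approach 1 (the file names and values below are made up for the example and don't come from any existing template), two catalog entries might differ only in the Compute node's host capability:

# small_deployment.yaml  -- illustrative name only
tosca_definitions_version: tosca_simple_yaml_1_0

topology_template:
  node_templates:
    web_server:
      type: tosca.nodes.Compute
      capabilities:
        host:
          properties:
            num_cpus: 2
            mem_size: 4 GB
            disk_size: 20 GB

# large_deployment.yaml (also illustrative) would be identical except for the
# host properties, e.g. num_cpus: 8, mem_size: 16 GB, disk_size: 100 GB.

The orchestrator (or the catalogue front-end) would then simply pick one of the pre-built templates based on the user's selection.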

 

Both approaches are possible with the TOSCA standard, and I don’t believe that what you’re asking for requires further additions to the standard. Instead, it requires significantly more sophistication in the orchestrators (which is what some of us are focusing on).

 

On a related note, there is a TOSCA sub-group focused on using TOSCA for orchestration of Network Functions Virtualization (NFV) deployments. The NFV spec includes the notion of a “deployment flavor”, which is intended to be used in a way similar to what you’re describing. There hasn’t been much work done yet on “standardizing” the deployment flavor concept in TOSCA, but this is something the group will be focusing on over the next couple of months.

 

Chris

 

From: tosca@lists.oasis-open.org [mailto:tosca@lists.oasis-open.org] On Behalf Of Luca Gioppo
Sent: Friday, June 05, 2015 12:32 AM
To: TOSCA
Subject: RE: [tosca] tosca.nodes.Compute input data questions

 

The problems that I see are:

option 1)

- A user may not be skilled enough to judge the CPU + RAM + disk combination and would be better served by a "sizing" object that wraps the complexity of the deployment.  Also consider that the end user of a TOSCA file may not (and should not have to) be the one who wrote it, so he does not have the knowledge of the correct combination of sizing values in a complex architecture (where do I add the CPU: on the web tier, or on the DB?)

- the combination is not something that can be improvised; it has to depend on sizing considerations, and each tier needs to scale to the dimension that the application requires (or better, that the architect of that application designed), so we need to be able to associate a SET of values with the "sizing" object (a rough sketch of what I mean follows below)
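
Purely as a hypothetical sketch (nothing like this exists in the Simple Profile today, and the type name is invented), such a "sizing" object could be a custom data type that keeps the related values together as one SET:

tosca_definitions_version: tosca_simple_yaml_1_0

data_types:
  example.datatypes.Sizing:        # hypothetical type name, not defined by the spec
    description: Groups the related sizing values into a single SET
    properties:
      num_cpus:
        type: integer
      mem_size:
        type: scalar-unit.size
      disk_size:
        type: scalar-unit.size

topology_template:
  inputs:
    web_tier_sizing:
      type: example.datatypes.Sizing   # the end user picks one named SET instead of three separate numbers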

 

option 2)

Scalability should be an option for customers (and for applications designed to dynamically scale) that want to scale ... as a customer I might not want to pay more in the cloud for my application to scale out; maybe I'm happy to have a degraded service sometimes, since it is my money.

But when I choose, I plan my investment considering the sizing that the application proposed: I go to the catalogue, see application1 which offers 3 sizings, decide I need size 2 and choose it, which means a total of 4 CPUs, 6 GB RAM and 50 GB disk + 1 public IP; I buy all that in the cloud and I go ahead with the deployment.  If I'm a public institution I normally plan ahead and have little chance to change it.

 

My point here is tied to the end usage of the TOSCA standard, which is offering an easy approach to deploying in the cloud starting from provisioning, and we need a way to ease the understanding of these aspects of deployment customization right in the standard: it is true that I could describe these things outside of it, but then the standard does not help me enough and I will have to develop something more around it.

 

My idea is that by adding some "definitions" "somewhere" we could allow for parametrizing a SET of values in a way that also adds the concept of related parameters in an easy way (this can be useful not only in the sizing example).

It would be nice if we could choose between different YAML macros, but I'm afraid that the specification does not allow it.

Another option would be allowing for different "definition files" and some more logic in the TOSCA-metadata file, where we could address the choice between different deployment solutions. This could also give further meaning to the capabilities and so on; that is: do you prefer the application on MySQL or on PostgreSQL? This could send the user to a "definition" that includes the MySQL node or the PostgreSQL one in the same CSAR (a strawman of what such metadata could look like follows below).
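
As a pure strawman (only TOSCA-Meta-File-Version, CSAR-Version, Created-By and Entry-Definitions are real keynames; the last line and the file names are invented), the TOSCA.meta file of such a CSAR could advertise the alternatives:

TOSCA-Meta-File-Version: 1.0
CSAR-Version: 1.1
Created-By: Example Author
Entry-Definitions: Definitions/portal_mysql.yaml
Alternative-Definitions: Definitions/portal_postgresql.yaml

The orchestrator (or the catalogue front-end) could then ask the user which definition to instantiate from the same CSAR.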

 

This could be a "TOSCA starter configuration" moment that the orchestrator could present to the end user (if present) in the case of multi-choice deployment options.

 

Luca

_________________________________________________________________________

On 5 June 2015 at 1:01, Chris Lauwers <lauwers@ubicity.com> wrote:


Hi Luca,

 

For your use case, what you want is to “parameterize” your TOSCA model so you can “customize” it at instantiation time. There are a couple of ways to do this:

 

1. If you simply want to specify different CPU and memory requirements for different classes of customers, you could provide input values that set CPU and memory properties when you deploy the template.

2. If you’d like to do something more sophisticated that affects the topology (e.g. horizontal scaling), then this might be possible by setting the “scalable” capability and controlling that capability through policies. Support for policies is currently being added to the Simple Profile spec. A rough sketch of both options follows below.
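
As a minimal sketch (the node and input names and all the numbers are just placeholders), the two options could look like this in a Simple Profile YAML template:

tosca_definitions_version: tosca_simple_yaml_1_0

topology_template:
  inputs:
    cpus:
      type: integer
      default: 2
      constraints:
        - valid_values: [ 2, 4, 8 ]
    mem_size:
      type: scalar-unit.size
      default: 4 GB

  node_templates:
    app_server:
      type: tosca.nodes.Compute
      capabilities:
        host:
          properties:
            num_cpus: { get_input: cpus }      # option 1: set per deployment via inputs
            mem_size: { get_input: mem_size }
        scalable:
          properties:                          # option 2: horizontal scaling bounds,
            min_instances: 1                   # to be controlled through policies
            default_instances: 1
            max_instances: 5

At deployment time the user (or a portal acting on his behalf) would supply the input values, e.g. cpus: 4 and mem_size: 8 GB.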

 

Thanks,

 

Chris

 

 

From: tosca@lists.oasis-open.org [mailto:tosca@lists.oasis-open.org] On Behalf Of Luca Gioppo

Sent: Thursday, June 04, 2015 3:24 AM

To: TOSCA

Subject: [tosca] tosca.nodes.Compute input data questions

 

Again on the Compute.

The CPU, RAM, etc. data could be chosen by the end user from a set of given options decided by the application implementor.

This could be a way to prepare the final architecture by choosing between different sizing options.

 

My problem is that I expect to use the content of the TOSCA file to present the end user with an element in a catalogue so that he can choose to deploy that application.

Consider that the application is something like a complex public administration solution (like a transparency portal: imagine at least 3 virtual machines with a lot of software on them).

 

The goal is to present the end user with a choice of different sizings (small municipality, city, region); this means that we at least have to have different sizings for the VMs, so different data in the Compute node description depending on the choice.

Considering the option where we do not scale VMs but just CPU and RAM, is it wise to use a single TOSCA file (and archive) to describe all the options?

If not, does this mean that I have to have one TOSCA CSAR for each sizing, where the only things that change are the values in the Compute nodes?

Is there a way to define in the standard (maybe in the TOSCA-Meta file) the association between some sizing description and the proper TOSCA description?  This option would also solve the case where sizings require a different set of machines, like clustering etc.

 

Thanks

Luca



