OASIS Mailing List Archives

 



tosca message



Subject: artifact processing


For tomorrow’s Simple Profile meeting, I suggest we keep thinking about how to “formalize” mechanisms that describe how artifacts need to be processed.

Just to recap: most (if not all) of the prose in the document uses examples where artifacts are “install scripts” that need to be run on a “Host”, where a host is assumed to be a Compute node that is the target of a HostedOn relationship.

However, in practice we need to be able to handle artifacts other than install scripts. I can think of the following four different types of artifacts (there may be others):

  1. Install scripts: scripts that are run on a “Host”, as just described
  2. API scripts: scripts that “deploy” nodes by making API calls to an external entity (e.g. Python scripts that call OpenStack or OpenDaylight APIs)
  3. Playbooks/recipes (e.g. Ansible playbooks, or Chef recipes)
  4. Images: “snapshots” of deployed entities.
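To make the distinction concrete, these categories could be modeled as TOSCA artifact types. The names below are hypothetical sketches; only the tosca.artifacts.* base types appear in the current draft:

```yaml
artifact_types:
  # Hypothetical derived types; only the tosca.artifacts.* bases exist in the draft
  com.example.artifacts.InstallScript:
    derived_from: tosca.artifacts.Implementation
  com.example.artifacts.ApiScript:
    derived_from: tosca.artifacts.Implementation
  com.example.artifacts.AnsiblePlaybook:
    derived_from: tosca.artifacts.Root
  com.example.artifacts.VMImage:
    derived_from: tosca.artifacts.Deployment.Image
```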

 

Each of these types of artifacts requires a different mechanism for getting the artifact deployed. Put differently, each of these types of artifacts may need to get “processed” differently. This means that in order to fully specify operations, we can’t just specify the artifact for the operation; we also need to be clear about the processor that is needed to process that artifact:

                operation: <artifact> + <artifact processor>
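In template terms, that pairing might look like the following sketch. The “processor” key is hypothetical and does not exist in the current grammar; it is only meant to show the artifact and its processor side by side:

```yaml
node_templates:
  my_software:
    type: tosca.nodes.SoftwareComponent
    interfaces:
      Standard:
        create:
          implementation: scripts/install.sh           # the artifact
          # hypothetical addition: name the processor explicitly
          processor: com.example.processors.HostShell  # the artifact processor
```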

Flexible artifact processing, then, requires the following:

  1. Specifying the type of processor required for the artifact
  2. Specifying any configuration parameters for the processor
  3. Specifying tenant/user-specific parameters for the processor

 

Specifying the type of processor

Ideally, each type of artifact would have a unique artifact processor, which would allow us to “standardize” on artifact processors based on the type of artifact. However, how do we handle similar artifacts that can belong to multiple types? For example:

- A Python script could be an install script to be run on a Host

- A Python script could be an API script to be run by the Orchestrator

If we statically “define” artifact processor types, we can’t simply select the processor based on the artifact’s file extension or artifact type.
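An explicit processor declaration could disambiguate the two Python-script cases above, since the .py extension is the same in both. The “processor” key and processor type names here are hypothetical:

```yaml
# Hypothetical 'processor' key; the file extension cannot tell these apart
web_server:
  interfaces:
    Standard:
      create:
        implementation: scripts/deploy.py
        processor: com.example.processors.HostShell     # install script, runs on the Host
network_service:
  interfaces:
    Standard:
      create:
        implementation: scripts/deploy.py
        processor: com.example.processors.Orchestrator  # API script, run by the Orchestrator
```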

Processor configuration

In order to “use” a processor, we may need configuration parameters for this processor. These could include:

- DNS names (or IP addresses) for contacting the processor (e.g. Chef servers, or API servers).

In some cases, the processor may not already be running, in which case the processor itself might need to get orchestrated (e.g. using TOSCA). In this case, the configuration parameters would be the result of the orchestration, but we would need a CSAR file representing the processor.
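As a sketch, a processor together with its configuration parameters might be declared like this. The “processors” section and all names are hypothetical, not part of the current spec:

```yaml
processors:
  chef_processor:
    type: com.example.processors.ChefServer
    properties:
      server_url: https://chef.example.com   # DNS name for contacting the processor
    # if the processor must itself be orchestrated first, reference a CSAR instead,
    # and derive server_url from the result of that orchestration:
    # implementation: processors/chef-server.csar
```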

Tenant-Specific parameters

Some processor-related parameters may be necessary to “use” the processor, for example user credentials. We may need to specify those.
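One option is to supply those as topology inputs, so each tenant provides its own values at deployment time. A hypothetical sketch, reusing the Chef example:

```yaml
topology_template:
  inputs:
    chef_user:
      type: string
      description: tenant-specific user name for the Chef server
    chef_key:
      type: string
      description: tenant-specific client key for the Chef server
```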

 

Let’s discuss whether this is the “right” way to think about artifact processing, and if so, how to reflect it in the TOSCA spec.

 

Thanks,

 

Chris

 

 


