CloudSlang Documentation¶
Overview¶
The CloudSlang Project¶
CloudSlang is a flow-based orchestration tool for managing deployed applications. It allows you to rapidly automate your DevOps and everyday IT operations use cases using ready-made workflows or create custom workflows using a YAML-based DSL.
The CloudSlang project is composed of three main parts: the CloudSlang Orchestration Engine, the CloudSlang language and the ready-made CloudSlang content.

CloudSlang Orchestration Engine¶
The CloudSlang Orchestration Engine is packaged as a lightweight Java .jar file and can therefore be embedded into existing Java projects.
The engine can support additional workflow languages by adding a compiler that translates the workflow DSL into the engine’s generic workflow execution plans.
CloudSlang Language¶
The CloudSlang language is a YAML-based DSL for writing workflows. Using CloudSlang you can define a workflow in a structured, easy-to-understand format that can be run by an embedded instance of the CloudSlang Orchestration Engine or the stand-alone CloudSlang CLI.
The CloudSlang language is simple and elegant, yet immensely powerful at the same time.
There are two main types of CloudSlang content, operations and flows. An operation contains an action, which can be written in Python or Java. Operations perform the “work” part of the workflow. A flow contains steps, which stitch together the actions performed by operations, navigating and passing data from one to the other based on operation results and outputs. Flows perform the “flow” part of the workflow.
CloudSlang Ready-Made Content¶
Although writing your own CloudSlang content is easy, in many cases you don’t even need to write a single line of code to leverage the power of CloudSlang. The CloudSlang team has already written a rich repository of ready-made content to perform common tasks as well as content that integrates with many of today’s hottest technologies, such as Docker and CoreOS. And, the open source nature of the project means that you’ll be able to reuse and repurpose content shared by the community.
CloudSlang Features¶
CloudSlang and its orchestration engine are:
- Process Based: allowing you to define the ‘how’ and not just the ‘what’ to better control the runtime behavior of your workflows.
- Agentless: there are no agents to set up and manage on all your machines. Instead, workflows use remote APIs to run tasks.
- Scalable: execution logic and distribution are optimized for high throughput and are horizontally scalable.
- Embeddable: the CloudSlang Orchestration Engine is distributed as a standard Java library, allowing you to embed it and run CloudSlang from your own applications.
- Content Rich: you can build your own flows, or just use CloudSlang ready-made content.
Get Started¶
It’s easy to get started running CloudSlang flows, especially using the CLI and ready-made content.
Download, Unzip and Run¶
- Download the CLI with content zip file.
- Unzip the archive.
- Run the CloudSlang executable in the cslang-cli/bin folder.
- At the prompt enter:

run --f ../content/io/cloudslang/base/print/print_text.sl --i text=Hi

The CLI will run the ready-made print_text operation, which prints the value passed to the variable text to the screen.
Docker¶
There are two CloudSlang Docker images. One (cloudslang/cloudslang) is a lightweight image meant to get you running CloudSlang flows as quickly as possible. The other image (cloudslang/cloudslang-dev) adds the tools necessary to develop CloudSlang flows.
cloudslang/cloudslang¶
This image includes:
- Java
- CloudSlang CLI
- CloudSlang content
To get the image:

docker pull cloudslang/cloudslang

To run a flow with a CloudSlang prompt:

docker run -it cloudslang/cloudslang

At the prompt enter:

run --f ../content/io/cloudslang/base/print/print_text.sl --i text=Hi

Or, to run the flow without the prompt:

docker run --rm cloudslang/cloudslang run --f ../content/io/cloudslang/base/print/print_text.sl --i text=first_flow

The CLI will run the ready-made print_text operation, which prints the value passed to the variable text to the screen.
cloudslang/cloudslang-dev¶
This image includes:
- Java
- CloudSlang CLI
- CloudSlang content
- Python
- Pip
- Vim
- Emacs
- SSH
- Git
- Atom
- language-cloudslang Atom package
To get the image: docker pull cloudslang/cloudslang-dev
Next Steps¶
Now that you’ve run your first CloudSlang file, you might want to:
- Watch a video lesson on how to author CloudSlang content using Atom.
- Learn how to write a print operation yourself using the Hello World example.
- Learn about the language features using the New Hire Tutorial.
- Learn about the language in detail using the CloudSlang Reference.
- See an overview of the ready-made content.
- Learn about the ready-made content.
- Learn about embedding CloudSlang or the CloudSlang Orchestration Engine into your existing application.
- Learn about the architecture of CloudSlang and the CloudSlang Orchestration Engine.
FAQ¶
What is CloudSlang?
CloudSlang is an open source project to automate your development and operations use cases using ready-made workflows. CloudSlang uses a process-based approach to orchestrating popular technologies, such as Docker and CoreOS in an agentless manner. You can use the ready-made CloudSlang content or define your own custom workflows that are reusable, shareable and easy to understand.
What are the use cases and focus for CloudSlang?
CloudSlang can orchestrate a wide variety of technologies. Currently, our ready-made content focuses on popular DevOps technologies, such as Docker and CoreOS.
What is the difference between CloudSlang and a PaaS?
CloudSlang is not a traditional PaaS framework, although it can be used to orchestrate one. PaaS platforms such as OpenShift or Cloud Foundry focus primarily on common application stacks and architectures, and are designed to improve developer productivity by making it easy to develop and deploy new, simple applications.
CloudSlang is designed to orchestrate complex, non-trivial, process-based workflows. For example, CloudSlang content allows you to integrate with the OpenShift or Cloud Foundry (Stackato) PaaS platforms to orchestrate application lifecycle creation.
How is CloudSlang different than configuration management tools like Chef, Puppet, Salt and Ansible? Can I use it with these tools?
Configuration management (CM) tools like Chef, Puppet, Salt and Ansible are great for configuring individual servers and preparing them for service. Given a server and a desired state, they will make sure to take all the required steps to configure that server so that it ends up in the desired state.
As an open source runbook automation product, CloudSlang allows you to automate many use cases, such as application or server provisioning, taking all the steps required to realize the application stack. This includes provisioning infrastructure resources on the cloud (compute, storage and network), assigning the right roles to each provisioned VM, configuring each VM (which is typically done by CM tools), injecting the right pieces of information into each tier, starting the tiers up in the right order, continuously monitoring the instances of each tier, healing on failure and scaling tiers when needed.
CloudSlang can indeed integrate with CM tools as needed for configuring individual VMs; in fact, this is a best practice. For example, CloudSlang provides ready-made content for integrating with Chef.
What is the quickest way to try out CloudSlang?
Follow the directions on the CloudSlang website or head over to the Get Started section to download CloudSlang and run your first CloudSlang content.
Which languages are supported by CloudSlang?
Python and Java operations are supported natively in CloudSlang.
What other technologies does CloudSlang integrate with?
CloudSlang was built to work with your favorite technologies out of the box. For example, not only are OpenStack and Docker supported, but also configuration management tools like Chef. There’s also support for bash (for *nix systems) and PowerShell (for Windows), as well as basic operations for REST and SOAP. To see a complete list of technologies that CloudSlang integrates with, see the ready-made content repository.
We’re not looking to replace great tools; we work with them. Many of the tools you’re used to working with are already supported by CloudSlang. The CloudSlang team, along with its growing open source community, is constantly expanding the list of tools we work with, so if your favorite tool isn’t supported yet, there’s a good chance it will be soon. Of course, we encourage and support contributions from the community. (In fact, this very answer you’re reading was contributed by a member of the community.)
CloudSlang¶
Content¶
Ready-made CloudSlang content is hosted on GitHub in the cloud-slang-content repository. The repository contains CloudSlang content written by the CloudSlang team as well as content contributed by the community.
The cloud-slang-content repository contains ready-made CloudSlang flows and operations for many common tasks as well as content that integrates with several other systems.
The repository may contain some beta content. Beta content is not verified or tested by the CloudSlang team and is named with the beta_ prefix. The community is encouraged to assist in setting up testing environments for the beta content.
For more information on the content contained in the repository, see the docs page.
Running CloudSlang Content¶
The simplest way to get started running ready-made CloudSlang content is to download and run the pre-packaged cslang-cli-with-content file as described in the Get Started section.
Alternatively, you can build the CLI from source and download the content separately. To build the CLI yourself and for more information on using the CLI, see the CLI section.
Note
When using a locally built CLI you may need to include a classpath to properly reference ready-made content. For information on using classpaths, see Run with Dependencies.
Running Content Dependent on Java Actions¶
Some of the content is dependent on Java actions from the cs-actions repository. CloudSlang uses Maven to manage these dependencies. When executing an operation that declares a dependency, the required Maven project and all the resources specified in its pom's dependencies will be resolved and downloaded if necessary.
Running Content Dependent on External Python Modules¶
Some of the content is dependent on external python modules. To run this content follow the instructions found in the python_action section of the DSL Reference.
Contributing Content¶
We welcome and encourage community contributions to CloudSlang. Please see the contribution section to familiarize yourself with the Contribution Guidelines and Project Roadmap before contributing.
Hello World¶
The following is a simple example to give you an idea of how CloudSlang is structured and can be used to ensure your environment is set up properly.
Prerequisites¶
This example uses the CloudSlang CLI to run a flow. See the CloudSlang CLI section for instructions on how to download and run the CLI.
Although CloudSlang files can be composed in any text editor, using a modern code editor with support for YAML syntax highlighting is recommended. See CloudSlang Editors for instructions on how to download, install and use the CloudSlang language package for Atom.
Code files¶
Download the code or use the following instructions:
Create a folder examples and then another folder hello_world inside the examples folder. In the hello_world folder, create two new CloudSlang files, hello_world.sl and print.sl.
You should now have the following folder structure:
examples
  hello_world
    - hello_world.sl
    - print.sl
Copy the code below into the corresponding files.
hello_world.sl
namespace: examples.hello_world

flow:
  name: hello_world
  workflow:
    - sayHi:
        do:
          print:
            - text: "'Hello, World'"
        navigate:
          - SUCCESS: SUCCESS
  results:
    - SUCCESS
print.sl
namespace: examples.hello_world

operation:
  name: print
  inputs:
    - text
  python_action:
    script: print text
  results:
    - SUCCESS
Run¶
Start the CLI and enter the following command at the cslang> prompt:
run --f <path_to_files>/examples/hello_world/hello_world.sl --cp <path_to_files>/examples/hello_world
Note
Use forward slashes in the file paths.
The output will look similar to this:
- sayHi
Hello, World
Flow : hello_world finished with result : SUCCESS
Execution id: 101600001, duration: 0:00:00.790
Explanation¶
The CLI runs the flow contained in the file passed to it using the --f flag, namely hello_world.sl. The --cp flag is used to specify the classpath where the flow's dependencies can be found. In our case, the flow refers to the print operation, so we must add its location to the classpath.

Note

If you are using a CLI without the content folder, specifying the classpath in this instance is not necessary.

The flow named hello_world begins its workflow. The workflow has one step named sayHi, which calls the print operation. The flow passes the string "Hello, World" to the text input of the print operation. The print operation performs its python_action, which is a simple Python script that prints the input, and then returns a result of SUCCESS. Since the flow does not contain any more steps, the flow finishes with a result of SUCCESS.
YAML Overview¶
Before writing CloudSlang code it helps to have a good working knowledge of YAML. YAML is a human friendly, data serialization language that has become a popular choice for configuration files and other hand-crafted data files. CloudSlang uses YAML to define its flows and operations.
This section contains a brief overview of the common YAML syntax and the best practices for writing it. See the full YAML specification for more information.
Basics¶
The contents of a YAML file define a single data structure (graph) that is composed of nested nodes (mappings, sequences and scalars). Well crafted YAML files are easy to read, and can have comments in the right places to explain things better.
YAML tries to be human friendly by minimizing the need for special syntax characters when the data is simple. The most common YAML special characters are:

- : separates a key from its value
- - denotes a sequence entry
- # starts a comment
Note
You should be aware that YAML also supports very complex data and has some special characters you need to watch out for. Even if you never use those features, YAML will fail to parse if you accidentally use a special character incorrectly. This is easily avoided and covered more below.
If you are familiar with the popular data language, JSON, then YAML should be easy to learn. Effectively YAML can be thought of as JSON with less syntax (quotes, brackets, etc), although YAML also supports the JSON style syntax. In fact, YAML is a strict superset of JSON.
Here are some basic YAML facts and guidelines:
- Structure is usually scoped by indentation
- Line comments can be used almost anywhere
- String values rarely need quotes
- YAML has 5 string quoting styles
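The five string styles can be sketched side by side; the keys below are hypothetical, chosen only for the demonstration:

```yaml
plain: an unquoted string            # plain (unquoted) style
single: 'it''s single quoted'        # single quoted; '' escapes a quote
double: "first line\nsecond line"    # double quoted; supports \n, \\, \"
literal: |                           # literal block; line breaks preserved
  line one
  line two
folded: >                            # folded block; line breaks become spaces
  these words end up
  on one line
```

The Scalars section below covers when each style is the best choice.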
Indentation Scoping¶
Much like the Python programming language, YAML uses indentation to denote a change in scope level. This means that leading whitespace is syntactically significant. Indentation is always achieved using spaces. Tabs are not allowed.
While any number of spaces can be used for a given scope, it is a best practice to always use 2 spaces. This makes the YAML more consistent and readable.
Note
The - characters at the start of a sequence entry count as indentation.
Example: a CloudSlang step (in this case named divider) contains do, publish and navigate keys
- divider:
    do:
      divide:
        - dividend: ${input1}
        - divisor: ${input2}
    publish:
      - answer: ${quotient}
    navigate:
      - ILLEGAL: FAILURE
      - SUCCESS: printer
YAML calls the indentation style “block” and the JSON style “flow”. Flow style can be used at any point within the block style. Flow style doesn’t need quoting either. It is a best practice to only use flow style for small structures on a single line.
Example: above document using flow style
- divider:
    do:
      divide:
        - dividend: ${input1}
        - divisor: ${input2}
    publish:
      - answer: ${quotient}
    navigate: [{ILLEGAL: FAILURE}, {SUCCESS: printer}]
Mappings (Hashes, Objects, Dictionaries)¶
Mappings (maps) are a set of key/value pairs. Each key is separated from its value by a colon (:). The colon must be followed by a whitespace character (space or newline). The value can be a scalar (string/number) value, a nested mapping or a sequence.
Example: a CloudSlang step’s navigate key is mapped to a list of results and their targets

navigate:
  - ILLEGAL: FAILURE
  - SUCCESS: printer
Sequences (Lists, Arrays)¶
Sequences (seqs) are denoted with a hyphen and a space (- ) preceding each entry.
Example: a CloudSlang flow’s possible results are defined using a list mapped to the results key
results:
  - ILLEGAL
  - SUCCESS
Scalars (Strings, Numbers, Values)¶
Scalars are single values. They are usually strings but (like JSON) can also be numbers, booleans or null values. If a value is quoted, it is always a string. If unquoted, it is inspected to see whether it is something else, but defaults to being a string.
Strings can be denoted in several ways: unquoted, single quoted and double quoted. The best method for any given string depends on its content.
While most strings should be left unquoted, quotes are required in these cases:

- The string starts with a special character: one of !#%@&*`?|>{[ or -.
- The string starts or ends with whitespace characters.
- The string contains : or # character sequences.
- The string ends with a colon.
- The value looks like a number or boolean (123, 1.23, true, false, null) but should be a string.
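As an illustrative sketch (the keys and values here are made up), these are values that would be misread without quotes:

```yaml
answer: 'true'                  # unquoted, YAML would read a boolean
version: '1.10'                 # unquoted, YAML would read the number 1.1
note: 'looks like key: value'   # contains ': ', which YAML reads as a nested mapping
tag: '#not-a-comment'           # starts with the special character #
padded: '  keep the spaces  '   # starts and ends with whitespace
```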
Multi-line strings can be written using a pipe (|) to preserve line breaks or a greater than symbol (>) where each line break will be converted to a space. Multi-line strings can also use the unquoted or quoted styles above, but it is best practice to avoid that.

The double-quoted style is the only style that can support any character string, using escape sequences like \n, \\ and \". Single quoted strings have only one escape sequence: two single quotes ('') are used to put a single quote inside a single quoted string.
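A short sketch contrasting the two escape mechanisms (the keys are hypothetical):

```yaml
double_quoted: "first line\nsecond line has a \"quote\""  # \n, \", etc. work here
single_quoted: 'it''s got one escaped single quote'       # '' is the only escape
```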
Example: a name of a CloudSlang flow is defined using the unquoted style
flow:
  name: hello_world
Example: a string value is passed to a CloudSlang operation using the double quoted style
- sayHi:
    do:
      print:
        - text: "Hello, World\n"
Example: the pipe is used in CloudSlang to indicate a multi-line Python script
python_action:
  script: |
    if divisor == '0':
      quotient = 'division by zero error'
    else:
      quotient = float(dividend) / float(divisor)
Note
Learning the scalar styles and their specifics will help you write YAML files that are clear and concise.
Comments¶
Comments begin with the # symbol when it follows a whitespace character or appears at the beginning of a line.
# This is a line comment
flow: # Flow definition (trailing comment)
  name: hello_world # This flow is called 'hello_world'
Conclusion¶
YAML is a simple yet complete data language. This means that most of the time, simple things are simple. You just need to be aware that some things have special meaning to YAML that you might not expect.
If you need more help, there are lots of resources about YAML on the web. You may want to check out the YAML Reference Card.
DSL Reference¶
CloudSlang is a YAML (version 1.2) based language for describing a workflow. Using CloudSlang you can easily define a workflow in a structured, easy-to-understand format that can be run by the CloudSlang Orchestration Engine (Score). CloudSlang files can be run by the CloudSlang CLI or by an embedded instance of Score using the Slang API.
This reference begins with a brief introduction to CloudSlang files and their structure, then continues with a brief explanation of CloudSlang expressions and variable contexts. Finally, there are alphabetical listings of the CloudSlang keywords and functions. See the examples section for the full code examples from which many of the code snippets in this reference are taken.
CloudSlang Files¶
CloudSlang files are written using YAML. The recommended extension for CloudSlang flow and operation file names is .sl, but .sl.yaml and .sl.yml will work as well. CloudSlang system properties file names end with the .prop.sl extension.
Since CloudSlang is YAML-based, proper indentation is crucial. For more information, see the YAML Overview.
There are four types of CloudSlang files:
- flow - contains a list of steps and navigation logic that calls operations or subflows
- operation - contains an action that runs a script or method
- decision - contains decision logic without an action
- system properties - contains a list of system property keys and values
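The first three types are shown throughout this reference. As a sketch of the fourth, a minimal system properties file might look like this (the namespace and property names are made-up examples):

```yaml
namespace: examples.sysprops

properties:
  - host: 'localhost'
  - port: '22'
```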
The following properties are for all types of CloudSlang files. For properties specific to flow, operation, or system properties files, see their respective sections below.
Property | Required | Default | Value Type | Description | More Info
---|---|---|---|---|---
namespace | no | – | string | namespace of the file | namespace
imports | no | – | list of key:value pairs | files to import | imports
extensions | no | – | – | information to be ignored by the compiler | extensions
Naming¶
Names that are interpreted by YAML as non-string types (e.g. booleans, numbers) cannot be used without enclosing them in quotes (') to force YAML to recognize them as strings.
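For example, a hypothetical result list mixing ordinary names with names YAML would otherwise read as non-string types:

```yaml
results:
  - SUCCESS    # ordinary name, no quotes needed
  - 'TRUE'     # quoted so YAML does not read a boolean
  - '404'      # quoted so YAML does not read a number
```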
Namespace Names¶
Namespaces can be named using alphanumeric characters (a-z, A-Z and 0-9), underscores (_) and dashes (-), with a period (.) as a delimiter.
Since namespaces reflect the folder structure where their respective files are found, they cannot be named using names that are invalid as Windows or Linux folder names.
Namespaces are found in:
- system property fully qualified names
- flow, operation, decision and system properties namespaces
- import values
- step references
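For instance, a file stored under a content root at examples/hello_world/print.sl would declare a namespace reflecting that folder path (the path here is just an illustration):

```yaml
# file: <content_root>/examples/hello_world/print.sl
namespace: examples.hello_world
```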
Variable Names¶
Variable names in CloudSlang files cannot contain localized characters. They can be named using alphanumeric characters (a-z, A-Z and 0-9) and underscores (_), but may not begin with a number.
CloudSlang variable names must conform to both Python’s naming constraints as well as Java’s naming constraints.
Variable name rules apply to:
- inputs
- outputs
- published variables
- loop variables
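A sketch of valid and invalid names under these rules (all names are made up):

```yaml
inputs:
  - first_name      # valid: letters and an underscore
  - address2        # valid: contains a number, but does not begin with one
  #- 2nd_address    # invalid: begins with a number
  #- first-name     # invalid: dashes are not allowed in variable names
```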
Other Names¶
All other names can be composed of alphanumeric characters (a-z, A-Z and 0-9).
Since flow, operation and decision names must match the names of their respective files, they cannot be named using names that are invalid as Windows or Linux file names.
These rules apply to:
- import section aliases
- flow, operation and decision names
- step names
- result names
- navigation keys
- break keys
Uniqueness and Case Sensitivity¶
Inputs, outputs, results, publish values, fully qualified system properties and fully qualified executable names must be unique and are validated as case insensitive. When using any of the above, they must be referred to using the case in which they were declared.
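For example, an input declared as server_name must be referred to with exactly that casing everywhere it is used (the names are hypothetical):

```yaml
inputs:
  - server_name
outputs:
  - host: ${server_name}   # must match the declared case; ${Server_Name} would not resolve
```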
Encoding¶
When using the CLI or Build Tool, CloudSlang will use the encoding found in the CLI configuration file or the Build Tool configuration file, respectively, for input values. If no encoding is found in the configuration file, the CLI or Build Tool will use UTF-8.
Structure¶
The general structure of CloudSlang files is outlined here. Some of the properties that appear are optional. All CloudSlang keywords, properties and concepts are explained in detail below.
Flow file
Operation file
Decision file
System properties file
Expressions¶
Many CloudSlang keys map to either an expression or literal value.
Literal Values¶
Literal values are denoted as they are in standard YAML. Numbers are interpreted as numerical values and strings may be written unquoted, single quoted or double quoted.
Example: literal values
literal_number: 4
literal_unquoted_string: cloudslang
literal_single_quoted_string: 'cloudslang'
literal_double_quoted_string: "cloudslang"
Note
Where expressions are allowed as values (input defaults, output and result values, etc.) and a literal string value is being used, you are encouraged to use a quoted style of literal string.
Example: recommended style for literal strings
flow:
  name: flow_name #expression not allowed - unquoted literal string
  workflow:
    - step1:
        do:
          print:
            - text: "hello" #expression allowed - quoted literal string
Standard Expressions¶
Expressions are preceded by a dollar sign ($) and enclosed in curly brackets ({}).
Example: expressions
- expression_1: ${4 + 7}
- expression_2: ${some_input}
- expression_3: ${get('input1', 'default_input')}
Expressions with Special Characters¶
Expressions that contain characters that are considered special characters in YAML must be enclosed in quotes or use YAML block notation. If using quotes, use the style of quotes that is not already used in the expression. For example, if your expression contains single quotes ('), enclose the expression using double quotes (").
Example: escaping special characters
- expression1: "${var1 + ': ' + var2}"
- expression2: >
    ${var1 + ': ' + var2}
- expression3: |
    ${var1 + ': ' + var2}
Maps¶
To use a map where an expression is allowed use the default property.
Example: passing a map using the default property
- map1:
    default: {a: 1, b: c}
- map2:
    default: {'a key': 1, b: c}
It is also possible to use two sets of quotes and an expression marker, but the approach detailed above is the recommended one.
Example: passing a map using the expression marker and quotes
- map3: "${{'a key': 1, 'b': 'c'}}"
- map4: >
    ${{'a key': 1, 'b': 'c'}}
Contexts¶
Throughout the execution of a flow, its steps, operations, decisions and subflows there are different variable contexts that are accessible. Which contexts are accessible depends on the current section of the flow, operation or decision.
The table below summarizes the accessible contexts at any given location in a flow, operation or decision.
Contexts/Location | Context Passed To Executable | Flow Context | Operation/Decision Context | Action Outputs Context | Subflow/Operation Outputs Context | Step Arguments | Branched Step Output Values | Already Bound Values
---|---|---|---|---|---|---|---|---
flow inputs | Yes | | | | | | | Yes
flow outputs | | Yes | | | | | | Yes
operation/decision inputs | Yes | | | | | | | Yes
operation/decision outputs | | | Yes | Yes | | | | Yes
operation/decision results | | | Yes | Yes | | | |
step arguments | | Yes | | | | | | Yes
step publish | | Yes | | | Yes | | Yes - using branches_context | Yes
step navigation | | Yes | | | | | | Yes
action inputs | | | Yes | | | | |
Keywords (A-Z)¶
branches_context¶
May appear in the publish section of a parallel step.

As branches of a parallel_loop complete, values that have been output and the branch's result get placed as a dictionary into the branches_context list. The list is therefore in the order the branches have completed.

A specific value can be accessed using the index representing its branch's place in the finishing order and the name of the variable or the branch_result key.

Example - retrieves the name variable from the first branch to finish

publish:
  - first_name: ${branches_context[0]['name']}

More commonly, branches_context is used to aggregate the values that have been published by all of the branches.

Example - aggregates name values into a list

publish:
  - name_list: ${map(lambda x:str(x['name']), branches_context)}
branch_result¶
May appear in the publish section of a parallel step.

As branches of a parallel_loop complete, branch results get placed into the branches_context list under the branch_result key.

Example - aggregates branch results

publish:
  - branch_results_list: ${map(lambda x:str(x['branch_result']), branches_context)}
break¶
The key break is a property of a loop. It is mapped to a list of results on which to break out of the loop, or to an empty list ([]) to override the default breaking behavior. When the operation or subflow of the iterative step returns a result in the break's list, the iteration halts and the iterative step's navigation logic is run.

If the break property is not defined, the loop will break on results of FAILURE by default. This behavior may be overridden, so that iteration continues even when a result of FAILURE is returned, by defining alternate break behavior or by mapping the break key to an empty list ([]).
Example - loop that breaks on result of CUSTOM
loop:
  for: value in range(1,7)
  do:
    custom_op:
      - text: ${str(value)}
  break:
    - CUSTOM
navigate:
  - CUSTOM: print_end
Example - loop that continues even on result of FAILURE
loop:
  for: value in range(1,7)
  do:
    custom_op:
      - text: ${str(value)}
  break: []
class_name¶
The key class_name is a property of a java_action. It is mapped to the name of the Java class where the annotated @Action method resides.
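A hedged sketch of how class_name might appear inside a java_action; the Maven coordinates, class and method names below are invented for illustration:

```yaml
java_action:
  gav: 'io.cloudslang.content:cs-example:1.0.0'   # Maven artifact to resolve
  class_name: io.cloudslang.content.example.actions.ExampleAction
  method_name: execute                            # the @Action-annotated method
```

The gav value tells CloudSlang which Maven artifact to resolve, while method_name names the annotated method inside the class.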
decision¶
The key decision is mapped to the properties which make up the decision contents.

Property | Required | Default | Value Type | Description | More Info
---|---|---|---|---|---
name | yes | – | string | name of the decision | name
inputs | no | – | list | decision inputs | inputs
outputs | no | – | list | decision outputs | outputs
results | yes | – | list | possible decision results | results
Example - decision that compares two values
decision:
  name: compare
  inputs:
    - x
    - y
  outputs:
    - sum: ${str(int(x) + int(y))}
  results:
    - EQUAL: ${x == y}
    - LESS_THAN: ${int(x) < int(y)}
    - GREATER_THAN
default¶
The key default is a property of an input name. It is mapped to an expression value.

The expression's value will be passed to the flow, operation or decision if no other value for that input parameter is explicitly passed or if the input's private parameter is set to true. Passing an empty string (''), null, or an expression that evaluates to None is the same as not passing any value at all and will not override the default value.
Example - default values
inputs:
  - str_literal:
      default: "default value"
  - int_exp:
      default: ${str(5 + 6)}
  - from_variable:
      default: ${variable_name}
  - from_system_property:
      default: ${get_sp('system.property.key')}
A default value can also be defined inline by entering it as the value mapped to the input parameter’s key.
Example - inline default values
inputs:
  - str_literal: "default value"
  - int_exp: ${str(5 + 6)}
  - from_variable: ${variable_name}
  - from_system_property: ${get_sp('system.property.key')}
do¶
The key do is a property of a step name, a loop, or a parallel_loop. It is mapped to a property that references an operation or flow.

Calls an operation or flow and passes in the relevant arguments.
The operation or flow may be called in several ways:

- by referencing the operation or flow by name when it is in the default namespace (the same namespace as the calling flow)
- by using a fully qualified name, for example, path.to.operation.op_name
  - a path is recognized as a fully qualified name if the prefix (before the first .) is not a defined alias
- by using an alias defined in the flow's imports section followed by the operation or flow name (e.g. alias_name.op_name)
- by using an alias defined in the flow's imports section followed by a continuation of the path to the operation or flow and its name (e.g. alias_name.path.cont.op_name)
For more information, see the Operation Paths example.
Arguments are passed to a step using a list of argument names and
optional mapped expressions. The step must pass values for
all inputs found in the called operation,
decision or subflow that are required and don’t have
a default value. Passing an empty string (''
), null
, or an expression
that evaluates to None
is the same as not passing any value at all.
Argument names should be different than the output names found in the operation, decision or subflow being called in the step.
Argument names must conform to the rules for valid variable names.
An argument name without an expression will take its value from a variable with the same name in the flow context. Expression values will supersede values bound to flow inputs with the same name. To force the operation, decision or subflow being called to use its own default value, as opposed to a value passed in via expression or the flow context, omit the variable from the calling step’s argument list.
For a list of which contexts are available in the arguments section of a step, see Contexts.
Example - call to a divide operation with list of mapped step arguments
do:
divide:
- dividend: ${input1}
- divisor: ${input2}
Example - force an operation to use default value for punctuation input
flow:
name: flow
inputs:
- punctuation: "!"
workflow:
- step1:
do:
punc_printer:
- text: "some text"
#- punctuation
#commenting out the above line forces the operation to use its default value (".")
#leaving it in would cause the operation to take the value from the flow context ("!")
operation:
name: operation
inputs:
- text
- punctuation: "."
python_action:
script: |
print text + punctuation
extensions¶
The key extensions
is mapped to information that the compiler will ignore
and can therefore be used for various purposes.
Example - a flow that contains an extensions section
namespace: examples.extensions
flow:
name: flow_with_extensions_tag
workflow:
- noop_step:
do:
noop: []
extensions:
- some_key:
a: b
c: d
- another
flow¶
The key flow
is mapped to the properties which make up the flow
contents.
A flow is the basic executable unit of CloudSlang. A flow can run on its own or it can be used by another flow in the do property of a step.
| Property | Required | Default | Value Type | Description | More Info |
|---|---|---|---|---|---|
| name | yes | – | string | name of the flow | name |
| inputs | no | – | list | inputs for the flow | inputs |
| workflow | yes | – | list of steps | container for workflow steps | workflow |
| outputs | no | – | list | list of outputs | outputs |
| results | no | (SUCCESS / FAILURE) | list | possible results of the flow | results |
Example - a flow that performs a division of two numbers
flow:
name: division
inputs:
- input1
- input2
workflow:
- divider:
do:
divide:
- dividend: ${input1}
- divisor: ${input2}
publish:
- answer: ${quotient}
navigate:
- ILLEGAL: ILLEGAL
- SUCCESS: printer
- printer:
do:
print:
- text: ${input1 + "/" + input2 + " = " + answer}
navigate:
- SUCCESS: SUCCESS
outputs:
- quotient: ${answer}
results:
- ILLEGAL
- SUCCESS
for¶
The key for
is a property of a loop or a parallel_loop.
loop: for¶
A for loop iterates through a list or a map.
The iterative step will run once for each element in the list or key in the map.
Loop variables must conform to the rules for valid variable names.
When iterating through a list, the for
key is mapped to an iteration
variable followed by in
followed by a list, an expression that
evaluates to a list, or a comma delimited string.
Example - loop that iterates through the values in a list
- print_values:
loop:
for: value in [1,2,3]
do:
print:
- text: ${str(value)}
Example - loop that iterates through the values in a comma delimited string
- print_values:
loop:
for: value in "1,2,3"
do:
print:
- text: ${value}
Example - loop that iterates through the values returned from an expression
- print_values:
loop:
for: value in range(1,4)
do:
print:
- text: ${str(value)}
When iterating through a map, the for
key is mapped to iteration
variables for the key and value followed by in
followed by a map or
an expression that evaluates to a map.
Example - loop that iterates through the keys and values of a map
- print_values:
loop:
for: k, v in map
do:
print2:
- text1: ${k}
- text2: ${v}
parallel_loop: for¶
A parallel for loop loops in parallel branches over the items in a list.
The parallel step will run one branch for each element in the list.
The for
key is mapped to an iteration variable followed by in
followed by a list or an expression that evaluates to a list.
Example - step that loops in parallel through the values in a list
- print_values:
parallel_loop:
for: value in values_list
do:
print_branch:
- ID: ${value}
gav¶
The key gav
is a property of a java_action. It is
mapped to the group:artifact:version
of the Maven project in which an
annotated Java @Action resides.
Upon operation execution, the Maven project and all the
required resources specified in its pom’s dependencies
will be resolved and
downloaded (if necessary).
Example - referencing Maven artifact using gav
java_action:
gav: io.cloudslang.content:cs-xml:0.0.2
class_name: io.cloudslang.content.mail.actions.SendMailAction
method_name: execute
imports¶
The key imports
is mapped to the files to import as follows:
- key - alias
- value - namespace of file to be imported
Specifies the file’s dependencies, operations and subflows, by the namespace defined in their source file and the aliases they will be referenced by in the file.
Using an alias is one way to reference the operations and subflows used in a flow’s steps. For all the ways to reference operations and subflows used in a flow’s steps, see the do keyword and the Operation Paths example.
Import aliases must conform to the rules for valid names.
Example - import operations and subflow into flow
imports:
ops: examples.utils
subs: examples.subflows
flow:
name: hello_flow
workflow:
- print_hi:
do:
ops.print:
- text: "Hi"
- run_subflow:
do:
subs.division:
- input1: "5"
- input2: "3"
In this example, the ops alias refers to the examples.utils namespace. This alias is used in the print_hi step to refer to the print operation, whose source file defines its namespace as examples.utils. Similarly, the subs alias refers to the examples.subflows namespace. The subs alias is used in the run_subflow step to refer to the division subflow, whose source file defines its namespace as examples.subflows.
inputs¶
The key inputs
is a property of a flow,
operation or decision. It is mapped to a list
of input names. Each input name may in turn be mapped to its properties or an
input expression.
Inputs are used to pass parameters to flows, operations or decisions. Input names for a specific flow, operation or decision must be different than the output names of the same flow, operation or decision.
Input values must evaluate to type string.
For a list of which contexts are available in the inputs
section of a
flow, operation or decision, see
Contexts.
Input names must conform to the rules for valid variable names and uniqueness.
| Property | Required | Default | Value Type | Description | More info |
|---|---|---|---|---|---|
| required | no | true | boolean | is the input required | required |
| default | no | – | expression | default value of the input | default |
| private | no | false | boolean | if true, the default value always overrides values passed in | private |
| sensitive | no | transitive sensitivity or false | boolean | is the input sensitive | sensitive |
Example - several inputs
inputs:
- input1:
default: "default value"
private: true
- input2
- input3: "default value"
- input4: ${'input1 is ' + input1}
- password:
sensitive: true
java_action¶
The key java_action
is a property of an operation. It is
mapped to the properties that define where an annotated Java @Action resides.
| Property | Required | Default | Value Type | Description | More info |
|---|---|---|---|---|---|
| gav | yes | – | string | group:artifact:version | gav |
| class_name | yes | – | string | fully qualified Java class name | class_name |
| method_name | no | – | string | Java method name | method_name |
Example - CloudSlang call to a Java action
namespace: io.cloudslang.base.mail
operation:
name: send_mail
inputs:
- hostname
- port
- from
- to
- subject
- body
java_action:
gav: io.cloudslang.content:cs-xml:0.0.2
class_name: io.cloudslang.content.mail.actions.SendMailAction
method_name: execute
results:
- SUCCESS: ${ returnCode == '0' }
- FAILURE
Existing Java Actions¶
There are many existing Java actions which are bundled with the CloudSlang CLI. The source code for these Java actions can be found in the cs-actions repository.
Adding a New Java Action¶
To add a new Java action:
Create a Java method that conforms to the signature public Map<String, String> doSomething(parameters) and use the following annotations from com.hp.oo.sdk.content.annotations:
- @Action: specifies action information
  - name: name of the action
  - outputs: array of @Output annotations
  - responses: array of @Response annotations
- @Output: action output name
- @Response: action response
  - text: name of the response
  - field: result to be checked
  - value: value to check against
  - matchType: type of check
  - responseType: type of response
  - isDefault: whether or not the response is the default response
  - isOnFail: whether or not the response is the failure response
- @Param: action parameter
  - value: name of the parameter
  - required: whether or not the parameter is required
Values are passed to a Java action from an operation using CloudSlang inputs that match the annotated parameters.
Values are passed back from the Java action to an operation using the returned
Map<String, String>
, where the map’s elements each correspond to a name:value
that matches a CloudSlang output.
Example - Java action
package com.example.content.actions;
import com.hp.oo.sdk.content.annotations.Action;
import com.hp.oo.sdk.content.annotations.Output;
import com.hp.oo.sdk.content.annotations.Param;
import com.hp.oo.sdk.content.annotations.Response;
import com.hp.oo.sdk.content.plugin.ActionMetadata.MatchType;
import java.util.Map;
import java.util.HashMap;
public class SaySomething {
@Action(name = "Example Test Action",
outputs = {
@Output("message")
},
responses = {
@Response(text = "success", field = "message", value = "fail", matchType = MatchType.COMPARE_NOT_EQUAL),
@Response(text = "failure", field = "message", value = "fail", matchType = MatchType.COMPARE_EQUAL, isDefault = true, isOnFail = true)
}
)
public Map<String, String> speak(@Param(value = "text", required = true) String text) {
Map<String, String> results = new HashMap<>();
System.out.println("I say " + text);
results.put("message", text);
return results;
}
}
Use Maven to package the project containing the Java action method and release it to the remote repository defined in the CLI’s configuration file.
Below is an example pom.xml file that can be used for your Maven project.
Example - sample pom.xml
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>com.example.content</groupId>
<artifactId>action-example</artifactId>
<version>0.0.1-SNAPSHOT</version>
<packaging>jar</packaging>
<name>${project.groupId}:${project.artifactId}</name>
<description>Test Java action</description>
<dependencies>
<dependency>
<groupId>com.hp.score.sdk</groupId>
<artifactId>score-content-sdk</artifactId>
<version>1.10.6</version>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<artifactId>maven-compiler-plugin</artifactId>
<version>3.1</version>
<configuration>
<source>1.7</source>
<target>1.7</target>
</configuration>
</plugin>
</plugins>
</build>
</project>
Reference your Maven artifact using the gav key in the java_action section of your operation.
Upon the operation’s first execution, the Maven project and all
the required resources specified in its pom’s dependencies
will be resolved
and downloaded.
loop¶
The key loop
is a property of an iterative
step’s name. It is mapped to the iterative
step’s properties.
For each value in the loop’s list the do
will run an
operation or subflow. If the returned
result is in the break
list, or if break
does not appear and the
returned result is FAILURE
, or if the list has been exhausted, the
step’s navigation will run.
| Property | Required | Default | Value Type | Description | More Info |
|---|---|---|---|---|---|
| for | yes | – | variable in list | iteration logic | for |
| do | yes | – | operation or subflow call | the operation or subflow this step will run iteratively | |
| publish | no | – | list of key:value pairs | operation or subflow outputs to aggregate and publish to the flow level | |
| break | no | – | list of results | results on which to break out of the loop | break |
Example - loop that breaks on a result of CUSTOM
- custom3:
loop:
for: value in "1,2,3,4,5"
do:
custom3:
- text: ${value}
break:
- CUSTOM
navigate:
- CUSTOM: aggregate
- SUCCESS: skip_this
method_name¶
The key method_name
is a property of a java_action. It is
mapped to the name of the Java method where an annotated @Action resides.
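For example, in the java_action shown in the gav section above, method_name specifies the annotated method to invoke:

```yaml
java_action:
  gav: io.cloudslang.content:cs-xml:0.0.2
  class_name: io.cloudslang.content.mail.actions.SendMailAction
  method_name: execute   # the annotated Java @Action method to run
```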
name¶
The key name
is a property of flow,
operation or decision. It is mapped to a value
that is used as the name of the flow or operation.
The name of a flow, operation or decision may be used when called from a flow’s step.
The name of a flow, operation or decision must match the name of the file in which it resides, excluding the extension.
The name must conform to the rules for names and uniqueness.
Example - naming the flow found in the file division_flow.sl
name: division_flow
namespace¶
The key namespace
is mapped to a string value that defines the
file’s namespace.
The namespace of a file may be used by a flow to import dependencies.
Example - defining a namespace
namespace: examples.hello_world
Example - using a namespace in an imports definition
imports:
ops: examples.hello_world
Namespace values must conform to the rules described in Namespace Names. For more information about choosing a file’s namespace, see the CloudSlang Content Best Practices section.
Note
If the imported file resides in a folder that is different
from the folder in which the importing file resides, the imported file’s
directory must be added using the --cp
flag when running from the
CLI (see Run with Dependencies).
on_failure¶
The key on_failure
is a property of a workflow. It
is mapped to a step.
Defines the step, which when using default
navigation, is the target of a FAILURE
result returned from an operation or
flow. The on_failure
step can also be reached by
mapping one of a step’s navigation keys to
on_failure
. If a step’s navigation explicitly
maps to on_failure
, but there is no on_failure
step defined
in the flow, the flow ends with a result of FAILURE
.
The on_failure
step must be the last step in the flow.
The on_failure
step cannot contain a navigation
section. It always causes the flow to end with a result of
FAILURE
.
Example - failure step which calls a print operation to print an error message
- on_failure:
- failure:
do:
print:
- text: ${error_msg}
Example - explicitly navigating to the on_failure step
- go_to_failure:
do:
some_operation:
- input1
navigate:
- SUCCESS: SUCCESS
- FAILURE: on_failure
operation¶
The key operation
is mapped to the properties which make up the
operation contents.
| Property | Required | Default | Value Type | Description | More Info |
|---|---|---|---|---|---|
| name | yes | – | string | name of the operation | name |
| inputs | no | – | list | operation inputs | inputs |
| python_action | no | – | script key | operation logic | python_action |
| java_action | no | – | map | operation logic | java_action |
| outputs | no | – | list | operation outputs | outputs |
| results | no | SUCCESS | list | possible operation results | results |
Example - operation that adds two inputs and outputs the answer
operation:
name: add
inputs:
- left
- right
python_action:
script: ans = int(left) + int(right)
outputs:
- out: ${str(ans)}
results:
- SUCCESS
outputs¶
The key outputs
is a property of a flow,
operation or decision. It is mapped to a list
of output variable names. Each output name may in turn be mapped to its
properties or an output expression. Output
expressions must evaluate to strings.
Defines the parameters a flow, operation or decision exposes to possible publication by a step. The calling step refers to an output by its name.
Output names for a specific flow, operation or decision must be different than the input names of the same flow, operation or decision.
Output values must evaluate to type string.
For a list of which contexts are available in the outputs
section of a
flow, operation or decision,
see Contexts.
Output identifiers must conform to the rules for valid variable names and uniqueness.
| Property | Required | Default | Value Type | Description | More info |
|---|---|---|---|---|---|
| value | no | – | expression | value of the output | value |
| sensitive | no | transitive sensitivity or false | boolean | is the output sensitive | sensitive |
Example - various types of outputs
outputs:
- existing_variable
- output2: ${some_variable}
- output3: ${str(5 + 6)}
- password:
value: ${password}
sensitive: true
parallel_loop¶
The key parallel_loop
is a property of a parallel
step’s name. It is mapped to the parallel
step’s properties.
For each value in the loop’s list a branch is created and the do
will run an operation or subflow. When all
the branches have finished, the parallel
step’s publish and
navigation will run.
| Property | Required | Default | Value Type | Description | More Info |
|---|---|---|---|---|---|
| for | yes | – | variable in list | loop values | for |
| do | yes | – | operation or subflow call | operation or subflow this step will run in parallel | |
Example - parallel loop that publishes aggregated values and navigates
- print_values:
parallel_loop:
for: value in values
do:
print_branch:
- ID: ${value}
publish:
- name_list: ${map(lambda x:str(x['name']), branches_context)}
navigate:
- SUCCESS: print_list
- FAILURE: FAILURE
private¶
The key private
is a property of an input name. It
is mapped to a boolean value.
A value of true
will ensure that the input
parameter’s default value will not be overridden by
values passed into the flow, operation or
decision. An input set as private: true
must
also declare a default value. If private
is not defined,
values passed in will override the default value.
Example - default value of text input parameter will not be overridden by values passed in
inputs:
- text:
default: "default text"
private: true
properties¶
The key properties
is mapped to a list of key:value
pairs that define
one or more system properties. Each system property name may in turn be mapped
to its properties or a value.
System property names (keys) can contain alphanumeric characters (A-Za-z0-9), underscores (_) and hyphens (-). The names must conform to the rules for uniqueness.
System property values are retrieved using the get_sp() function.
Note
System property values that are non-string types (numeric, list, map,
etc.) are converted to string representations. A system property may have a
value of null
.
| Property | Required | Default | Value Type | Description | More info |
|---|---|---|---|---|---|
| value | no | – | – | value of the property | value |
| sensitive | no | false | boolean | is the property sensitive | sensitive |
Example - system properties file
namespace: examples.sysprops
properties:
- host: 'localhost'
- port: 8080
- password:
value: 'pwd'
sensitive: true
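For instance, the host property defined above can be retrieved elsewhere using get_sp() with the property’s fully qualified name (a sketch; the surrounding inputs section is assumed):

```yaml
inputs:
  - host:
      default: ${get_sp('examples.sysprops.host')}
```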
An empty system properties file can be defined using an empty list.
Example - empty system properties file
namespace: examples.sysprops
properties: []
publish¶
The key publish
is a property of a step name, a
loop or a parallel_loop. It is mapped to a
list of key:value pairs where the key is the published variable name and
the value is an expression, usually involving an output received
from an operation or flow.
For a list of which contexts are available in the publish
section of a
step, see Contexts.
Publish names must conform to the rules for valid variable names and uniqueness.
Standard publish¶
In a standard step, publish
binds an
expression, usually involving an
output from an operation or
flow, to a variable whose scope is the current
flow and can therefore be used by other steps or
as the flow’s own output.
Example - publish the quotient output as ans
- division1:
do:
division:
- input1: ${dividend1}
- input2: ${divisor1}
publish:
- ans: ${quotient}
Iterative publish¶
In an iterative step the publish mechanism is run during each iteration after the operation or subflow has completed, therefore allowing for aggregation.
Example - publishing in an iterative step to aggregate output: add the squares of values in a range
- aggregate:
loop:
for: value in range(1,6)
do:
square:
- to_square: ${str(value)}
- sum
publish:
- sum: ${str(int(sum) + int(squared))}
Parallel publish¶
In a parallel step the publish mechanism defines the step’s aggregation logic, generally making use of the branches_context construct.
After all branches of a parallel step have
completed, execution of the flow continues with the publish
section. The
expression of each name:value pair is evaluated and published to the
flow’s scope. The expression generally makes use of the
branches_context construct to access the values
published by each of the parallel loop’s branches and their
results using the branch_result key.
For a list of which contexts are available in the publish
section of a
step, see Contexts.
For more information, see the Parallel Loop example.
Example - publishing in a parallel step to aggregate output
- print_values:
parallel_loop:
for: value in values_list
do:
print_branch:
- ID: ${value}
publish:
- name_list: ${map(lambda x:str(x['name']), branches_context)}
Example - extracting information from a specific branch
- print_values:
parallel_loop:
for: value in values_list
do:
print_branch:
- ID: ${value}
publish:
- first_name: ${branches_context[0]['name']}
Example - create a list of branch results
- print_values:
parallel_loop:
for: value in values
do:
print_branch:
- ID: ${ value }
publish:
- branch_results_list: ${map(lambda x:str(x['branch_result']), branches_context)}
python_action¶
The key python_action
is a property of an operation. It is
mapped to a script property that contains the actual Python script.
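A minimal sketch (a hypothetical to_upper operation with an inline single-line script):

```yaml
operation:
  name: to_upper
  inputs:
    - text
  python_action:
    script: upper = text.upper()   # single-line scripts may be written inline
  outputs:
    - upper: ${upper}
```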
results¶
The key results
is a property of a flow,
operation or decision.
The results of a flow, operation or decision can be used by the calling step for navigation purposes.
A result name must conform to the rules for names and
uniqueness. Additionally, a result
cannot be named on_failure
.
Note
The only results of an operation, decision
or subflow called in a parallel_loop that
are evaluated are SUCCESS
and FAILURE
. Any other results will be
evaluated as SUCCESS
.
Flow Results¶
In a flow, the key results
is mapped to a list of result
names.
Defines the possible results of the flow. By default a
flow has two results, SUCCESS
and FAILURE
. The
defaults can be overridden with any number of user-defined results.
When overriding, the defaults are lost and must be redefined if they are to be used.
All result possibilities must be listed. When being used as a subflow all flow results must be handled by the calling step.
Example - a user-defined result
results:
- SUCCESS
- ILLEGAL
- FAILURE
Operation and Decision Results¶
In an operation or decision the key results
is mapped to a list of key:value pairs of result names and boolean
expressions.
Defines the possible results of the operation or
decision. By default, if no results exist, the result of an
operation is SUCCESS
. A decision does not
have any default results.
The first result in the list whose expression evaluates to true, or does not have an expression at all, will be passed back to the calling step to be used for navigation purposes.
If results are present, the list must include exactly one default ending
result which is not mapped to anything (- result
) or is mapped to the
value true
(- result: true
).
All operation or decision results must be handled by the calling step.
For a list of which contexts are available in the results
section of an
operation or decision, see
Contexts.
Example - three user-defined results
results:
- POSITIVE: ${polarity == '+'}
- NEGATIVE: ${polarity == '-'}
- NEUTRAL
required¶
The key required
is a property of an input name. It is
mapped to a boolean value.
A value of false
will allow the flow or
operation to be called without passing the
input parameter. If required
is not defined, the
input parameter defaults to being required.
Required inputs must receive a value or declare a default value.
Passing an empty string (''
), null
, or an expression that evaluates to
None
to a required input is the same as not passing any value at all.
Example - input2 is optional
inputs:
- input1
- input2:
required: false
script¶
The key script
is a property of python_action.
It is mapped to a value containing a Python script.
All variables in scope at the conclusion of the Python script must be
serializable. If non-serializable variables are used, remove them from
scope by using the del
keyword before the script exits.
Note
CloudSlang uses the Jython implementation of Python 2.7. For information on Jython’s limitations, see the Jython FAQ.
Example - action with Python script that divides two numbers
name: divide
inputs:
- dividend
- divisor
python_action:
script: |
if divisor == '0':
quotient = 'division by zero error'
else:
quotient = float(dividend) / float(divisor)
outputs:
- quotient: ${str(quotient)}
results:
- ILLEGAL: ${quotient == 'division by zero error'}
- SUCCESS
Note
Single-line Python scripts can be written inline with the
script
key. Multi-line Python scripts can use the YAML pipe
(|
) indicator as in the example above.
Importing External Python Packages¶
There are three approaches to importing and using external Python modules:
- Installing packages into the python-lib folder
- Editing the executable file
- Adding the package location to sys.path
Prerequisites: Python 2.7 and pip.
You can download Python (version 2.7) from here. Python 2.7.9 and later include pip by default. If you already have Python but don’t have pip, see the pip documentation for installation instructions.
Edit the requirements.txt file in the python-lib folder, which is found at the same level as the bin folder that contains the CLI executable.
- If not using a pre-built CLI, you may have to create the python-lib folder and requirements.txt file.
Enter the Python package and all its dependencies in the requirements file.
- See the pip documentation for information on how to format the requirements file (see example below).
Run the following command from inside the python-lib folder:
pip install -r requirements.txt -t .
Note
If your machine is behind a proxy you will need to specify the proxy using pip’s --proxy flag.
Import the package as you normally would in Python from within the action’s script:
python_action:
script: |
from pyfiglet import Figlet
f = Figlet(font='slant')
print f.renderText(text)
Example - requirements file
pyfiglet == 0.7.2
setuptools
Note
If you have defined a JYTHONPATH
environment variable, you
will need to add the python-lib folder’s path to its value.
- Open the executable found in the bin folder for editing.
- Change the Dpython.path key’s value to the desired path.
- Import the package as you normally would in Python from within the action’s script.
- In the action’s Python script, import the sys module.
- Use sys.path.append() to add the path to the desired module.
- Import the module and use it.
Example - takes path as input parameter, adds it to sys.path and imports desired module
inputs:
- path
python_action:
script: |
import sys
sys.path.append(path)
import module_to_import
print module_to_import.something()
Importing Python Scripts¶
To import a Python script in a python_action
:
- Add the Python script to the python-lib folder, which is found at the same level as the bin folder that contains the CLI executable.
- Import the script as you normally would in Python from within the
action’s
script
.
Note
If you have defined a JYTHONPATH
environment variable, you
will need to add the python-lib folder’s path to its value.
sensitive¶
The key sensitive
is a property of an input,
output or system property name. It is mapped to
a boolean value.
The sensitivity of an input or output is
transitive, and is therefore determined by its sensitive
property and by the
sensitivity of variables used in its related value expression.
Values that are sensitive
will not be printed in logs, events or in outputs
of the CLI and Build Tool.
Example - two sensitive inputs
inputs:
- input1:
default: "default value"
sensitive: true
- input1plus:
default: ${ get("input1") + "something else" }
Example - two sensitive outputs
outputs:
- output1:
value: ${output1}
sensitive: true
- output2: ${already_sensitive_value}
Example - a sensitive system property
properties:
- password:
value: 'pwd'
sensitive: true
step¶
A step name is a property of a workflow.
A step name must conform to the rules for names and
uniqueness. Additionally, a step cannot
be named on_failure
.
Every step which is not declared with the on_failure keyword must be reachable from another step.
There are several types of steps:
- standard
- iterative
- parallel
Example - step with two arguments, one of which contains a default value
- divider:
do:
some_op:
- host
- port: '25'
Standard Step¶
A standard step calls an operation or subflow once.
The step name is mapped to the step’s properties.
| Property | Required | Default | Value Type | Description | More Info |
|---|---|---|---|---|---|
| do | yes | – | operation or subflow call | the operation or subflow this step will run | |
| publish | no | – | list of key:value pairs | operation outputs to publish to the flow level | |
| navigate | no | FAILURE: on_failure or flow finish; SUCCESS: next step | list of key:value pairs | navigation logic from operation or flow results | |
Example - step that performs a division of two inputs, publishes the answer and navigates accordingly
- divider:
do:
divide:
- dividend: ${input1}
- divisor: ${input2}
publish:
- answer: ${quotient}
navigate:
- ILLEGAL: FAILURE
- SUCCESS: printer
Iterative Step¶
An iterative step calls an operation or subflow iteratively, for each value in a list.
The step name is mapped to the iterative step’s properties.
| Property | Required | Default | Value Type | Description | More Info |
|---|---|---|---|---|---|
| loop | yes | – | key | container for loop properties | for |
| navigate | no | FAILURE: on_failure or flow finish; SUCCESS: next step | key:value pairs | navigation logic | |
Example - step prints all the values in value_list and then navigates to a step named “another_step”
- print_values:
loop:
for: value in value_list
do:
print:
- text: ${value}
navigate:
- SUCCESS: another_step
- FAILURE: FAILURE
Parallel Step¶
A parallel step calls an operation or subflow in parallel branches, for each value in a list.
The step name is mapped to the parallel step’s properties.
| Property | Required | Default | Value Type | Description | More Info |
|---|---|---|---|---|---|
| parallel_loop | yes | – | key | container for parallel loop properties | parallel_loop |
| publish | no | – | list of key:values | values to aggregate from parallel branches | publish |
| navigate | no | FAILURE: on_failure or flow finish; SUCCESS: next step | key:value pairs | navigation logic | |
Example - step prints all the values in value_list in parallel and then navigates to a step named “another_step”
- print_values:
parallel_loop:
for: value in values_list
do:
print_branch:
- ID: ${value}
publish:
- name_list: ${map(lambda x:str(x['name']), branches_context)}
navigate:
- SUCCESS: another_step
- FAILURE: FAILURE
value¶
The key value
is a property of an output or
system property name. In an output, the key is
mapped to an expression value. In a
system property, the key is mapped to a valid
system property value.
The value key is most often used in conjunction with the sensitive key. Otherwise, an output or system property’s value can be defined inline by mapping it to the output or system property’s name.
Example - output values
outputs:
- password:
value: ${password}
sensitive: true
- another_output: ${op_output}
Example - system property values
properties:
- props.password:
value: 'pwd'
sensitive: true
- props.another_property: 'prop value'
workflow¶
The key workflow is a property of a flow. It is mapped to a list of the workflow’s steps.
Defines a container for the steps, their published variables and navigation logic.
The first step in the workflow is the starting step of the flow. From there the flow continues sequentially by default upon receiving results of SUCCESS, to the flow finish or to on_failure upon a result of FAILURE, or following whatever overriding navigation logic is present.
Property | Required | Default | Value Type | Description | More Info |
---|---|---|---|---|---|
on_failure | no | – | step | default navigation target for FAILURE | |
Example - workflow that divides two numbers and prints them out if the division was legal
workflow:
  - divider:
      do:
        divide:
          - dividend: ${input1}
          - divisor: ${input2}
      publish:
        - answer: ${quotient}
      navigate:
        - ILLEGAL: FAILURE
        - SUCCESS: printer
  - printer:
      do:
        print:
          - text: ${input1 + "/" + input2 + " = " + answer}
Functions (A-Z)¶
check_empty()¶
May appear in the value of an input, output, publish or result expression.
The function in the form of check_empty(expression1, expression2) returns the value associated with expression1 if expression1 does not evaluate to None. If expression1 evaluates to None, the function returns the value associated with expression2.
Example - usage of check_empty to check operation output in a flow
flow:
  name: flow
  inputs:
    - in1
  workflow:
    - step1:
        do:
          operation:
            - in1
        publish:
          - pub1: ${check_empty(out1, 'x marks the spot')}
          #if in1 was not 'x' then out1 is 'not x' and pub1 is therefore 'not x'
          #if in1 was 'x' then out1 is None and pub1 is therefore 'x marks the spot'
  outputs:
    - pub1
operation:
  name: operation
  inputs:
    - in1
  python_action:
    script: |
      out1 = 'not x' if in1 != 'x' else None
  outputs:
    - out1
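For intuition, the semantics of check_empty can be sketched as a plain Python helper. This is a hypothetical analogue for illustration only; in CloudSlang the function is used inside ${} expressions, not defined by the author.

```python
def check_empty(expression1, expression2):
    # Hypothetical Python analogue of CloudSlang's check_empty():
    # returns expression1's value unless it is None, otherwise expression2's.
    return expression1 if expression1 is not None else expression2

# Note the check is against None, not emptiness: an empty string passes through.
print(check_empty('not x', 'x marks the spot'))  # not x
print(check_empty(None, 'x marks the spot'))     # x marks the spot
print(check_empty('', 'fallback'))               # (empty string, not 'fallback')
```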
get()¶
May appear in the value of an input, output, publish or result expression.
The function in the form of get('key') returns the value associated with key if the key is defined. If the key is undefined, the function returns None.
The function in the form of get('key', 'default_value') returns the value associated with key if the key is defined and its value is not None. If the key is undefined or its value is None, the function returns the default_value.
Example - usage of get function in inputs and outputs
inputs:
  - input1:
      required: false
  - input1_safe:
      default: ${get('input1', 'default_input')}
      private: true
workflow:
  - step1:
      do:
        print:
          - text: ${input1_safe}
      publish:
        - some_output: ${get('output1', 'default_output')}
outputs:
  - some_output
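The two forms of get can likewise be sketched in Python. The explicit context argument is an assumption standing in for the variables visible to the expression, which CloudSlang resolves implicitly.

```python
def get(context, key, default_value=None):
    # Hypothetical analogue of CloudSlang's get():
    # get('key')            -> value if key is defined, otherwise None
    # get('key', 'default') -> value if key is defined and not None, otherwise default
    value = context.get(key)
    return value if value is not None else default_value

context = {'output1': 'some value', 'input1': None}
print(get(context, 'output1', 'default_output'))  # some value
print(get(context, 'input1', 'default_input'))    # default_input
print(get(context, 'missing'))                    # None
```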
get_sp()¶
May appear in the value of an input, step argument, publish, output or result expression.
The function in the form of get_sp('key', 'default_value') returns the value associated with the system property named key if the key is defined and its value is not null. If the key is undefined or its value is null, the function returns the default_value. The key is the fully qualified name of the system property, meaning the namespace (if there is one) of the file in which it is found, followed by a dot (.) and the name of the key.
System property values are always strings or null. Values of other types (numeric, list, map, etc.) are converted to string representations.
System properties are not enforced at compile time. They are assigned at runtime.
Note
If multiple system properties files are being used and they contain a system property with the same fully qualified name, the property in the file that is loaded last will overwrite the others with the same name.
Example - system properties file
namespace: examples.sysprops
properties:
  - host: 'localhost'
  - port: 8080
Example - system properties used as input values
inputs:
  - host: ${get_sp('examples.sysprops.host')}
  - port: ${get_sp('examples.sysprops.port', '8080')}
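Taken together, get_sp's lookup, default and string-conversion behavior can be sketched in Python. The system_properties mapping and helper name are assumptions standing in for the properties loaded at runtime.

```python
def get_sp(system_properties, key, default_value=None):
    # Hypothetical analogue of CloudSlang's get_sp(): look up a system
    # property by its fully qualified name (namespace, dot, property name);
    # fall back to default_value if the key is undefined or its value is null.
    value = system_properties.get(key)
    return value if value is not None else default_value

# Values are stored as strings (or None); non-string values such as the
# numeric 8080 in the properties file are converted to string representations.
props = {'examples.sysprops.host': 'localhost',
         'examples.sysprops.port': '8080'}
print(get_sp(props, 'examples.sysprops.host'))           # localhost
print(get_sp(props, 'examples.sysprops.timeout', '30'))  # 30
```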
To pass a system properties file to the CLI, see Run with System Properties.
Examples¶
The following simplified examples demonstrate some of the key CloudSlang concepts.
- Example 1 - User-defined Navigation and Publishing Outputs
- Example 2 - Default Navigation
- Example 3 - Subflow
- Example 4 - Loops
- Example 5 - Parallel Loop
- Example 6 - Operation Paths
Each of the examples below can be run by doing the following:
- Create a new folder.
- Create new CloudSlang(.sl) files and copy the code into them.
- Use the CLI to run the flow.
For more information on getting set up to run flows, see the CloudSlang CLI and Hello World Example sections.
Example 3 - Subflow¶
This example uses the flow from Example 1 as a subflow. It takes in four numbers (or uses default ones) to call division_flow twice. If either division returns the ILLEGAL result, navigation is routed to the on_failure step and the flow ends with a result of FAILURE. If both divisions are successful, the on_failure step is skipped and the flow ends with a result of SUCCESS.
Note
To run this flow, the files from Example 1 should be placed in the same folder as this flow file, or use the --cp flag at the command line.
Flow - master_divider.sl
namespace: examples.divide
flow:
  name: master_divider
  inputs:
    - dividend1: "3"
    - divisor1: "2"
    - dividend2: "1"
    - divisor2: "0"
  workflow:
    - division1:
        do:
          division:
            - input1: ${dividend1}
            - input2: ${divisor1}
        publish:
          - ans: ${quotient}
        navigate:
          - SUCCESS: division2
          - ILLEGAL: failure_step
    - division2:
        do:
          division:
            - input1: ${dividend2}
            - input2: ${divisor2}
        publish:
          - ans: ${quotient}
        navigate:
          - SUCCESS: SUCCESS
          - ILLEGAL: failure_step
    - on_failure:
        - failure_step:
            do:
              print:
                - text: ${ans}
Example 4 - Loops¶
This example demonstrates the different types of values that can be looped on and various methods for handling loop breaks.
Flow - loops.sl
namespace: examples.loops
flow:
  name: loops
  inputs:
    - sum:
        default: '0'
        private: true
  workflow:
    - fail3a:
        loop:
          for: value in [1,2,3,4,5]
          do:
            fail3:
              - text: ${str(value)}
        navigate:
          - SUCCESS: fail3b
          - FAILURE: fail3b
    - fail3b:
        loop:
          for: value in [1,2,3,4,5]
          do:
            fail3:
              - text: ${str(value)}
          break: []
    - custom3:
        loop:
          for: value in "1,2,3,4,5"
          do:
            custom3:
              - text: ${value}
          break:
            - CUSTOM
        navigate:
          - CUSTOM: aggregate
          - SUCCESS: skip_this
    - skip_this:
        do:
          print:
            - text: "This will not run."
        navigate:
          - SUCCESS: aggregate
    - aggregate:
        loop:
          for: value in range(1,6)
          do:
            print:
              - text: ${str(value)}
              - sum
          publish:
            - sum: ${str(int(sum) + int(out))}
          break: []
        navigate:
          - SUCCESS: print
    - print:
        do:
          print:
            - text: ${sum}
        navigate:
          - SUCCESS: SUCCESS
Operation - custom3.sl
namespace: examples.loops
operation:
  name: custom3
  inputs:
    - text
  python_action:
    script: print text
  results:
    - CUSTOM: ${int(text) == 3}
    - SUCCESS
Operation - print.sl
namespace: examples.loops
operation:
  name: print
  inputs:
    - text
  python_action:
    script: print text
  outputs:
    - out: ${text}
  results:
    - SUCCESS
Example 5 - Parallel Loop¶
This example demonstrates the usage of a parallel loop including aggregation.
Flow - parallel_loop_aggregate.sl
namespace: examples.parallel
flow:
  name: parallel_loop_aggregate
  inputs:
    - values: "1 2 3 4"
  workflow:
    - print_values:
        parallel_loop:
          for: value in values.split()
          do:
            print_branch:
              - ID: ${str(value)}
        publish:
          - name_list: "${', '.join(map(lambda x : str(x['name']), branches_context))}"
          - first_name: ${branches_context[0]['name']}
          - last_name: ${branches_context[-1]['name']}
          - total: "${str(sum(map(lambda x : int(x['num']), branches_context)))}"
        navigate:
          - SUCCESS: SUCCESS
  outputs:
    - name_list
    - first_name
    - last_name
    - total
  results:
    - SUCCESS
Operation - print_branch.sl
namespace: examples.parallel
operation:
  name: print_branch
  inputs:
    - ID
  python_action:
    script: |
      name = 'branch ' + str(ID)
      print 'Hello from ' + name
  outputs:
    - name
    - num: ${ID}
Example 6 - Operation Paths¶
This example demonstrates the various ways to reference an operation or subflow from a flow step.
This example uses the following folder structure:
- examples
  - paths
    - flow.sl
    - op1.sl
    - folder_a
      - op2.sl
    - folder_b
      - op3.sl
      - folder_c
        - op4.sl
Flow - flow.sl
namespace: examples.paths
imports:
  alias: examples.paths.folder_b
flow:
  name: flow
  workflow:
    - default_path:
        do:
          op1:
            - text: "default path"
        navigate:
          - SUCCESS: fully_qualified_path
    - fully_qualified_path:
        do:
          examples.paths.folder_a.op2:
            - text: "fully qualified path"
        navigate:
          - SUCCESS: using_alias
    - using_alias:
        do:
          alias.op3:
            - text: "using alias"
        navigate:
          - SUCCESS: alias_continuation
    - alias_continuation:
        do:
          alias.folder_c.op4:
            - text: "alias continuation"
        navigate:
          - SUCCESS: SUCCESS
  results:
    - SUCCESS
Operation - op1.sl
namespace: examples.paths
operation:
  name: op1
  inputs:
    - text
  python_action:
    script: print text
Operation - op2.sl
namespace: examples.paths.folder_a
operation:
  name: op2
  inputs:
    - text
  python_action:
    script: print text
Operation - op3.sl
namespace: examples.paths.folder_b
operation:
  name: op3
  inputs:
    - text
  python_action:
    script: print text
Operation - op4.sl
namespace: examples.paths.folder_b.folder_c
operation:
  name: op4
  inputs:
    - text
  python_action:
    script: print text
Tests¶
CloudSlang tests are written to test CloudSlang content and are run during the build process by the CloudSlang Build Tool.
Wrapper Flows¶
Test cases either test a flow or operation directly or use a wrapper flow that calls the flow or operation to be tested.
Wrapper flows are often used to set up an environment before the test runs and to clean up the environment after the test. They are also sometimes necessary for complex tests of a flow or operation’s outputs.
Wrapper flows are written in CloudSlang using the .sl extension and use the normal flow syntax.
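As a sketch, a wrapper flow might call the flow under test and then check a published output before returning a result. All names below (print_text, assert_equals, their inputs and outputs) are hypothetical, for illustration only:

```yaml
namespace: examples.tests
flow:
  name: print_text_wrapper
  inputs:
    - text
  workflow:
    # run the flow or operation being tested
    - run_print_text:
        do:
          examples.base.print_text:
            - text
        publish:
          - printed: ${out_text}
    # verify the published output; a FAILURE here ends the flow with FAILURE
    - verify_output:
        do:
          examples.tests.assert_equals:
            - actual: ${printed}
            - expected: ${text}
  results:
    - SUCCESS
    - FAILURE
```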
Test Suites¶
Test suites are groups of tests that are only run if the build declares them as active. Test suites are often used to group tests that require a certain environment that may or may not be present in order to run. When the environment is present the suite can be activated and when it is not present the tests will not run.
Tests declare which test suites they are a part of, if any, using the testSuites property.
If no test suites are defined for a given test case, the test will run unless !default is passed to the CloudSlang Build Tool.
Note
When using Linux, the exclamation mark (!) needs to be escaped with a preceding backslash (\). So, to ignore default tests, pass \!default to the CloudSlang Build Tool.
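For example, a test case that should only run when a Docker environment is available might declare its suite as follows (the flow path and suite name are hypothetical):

```yaml
testImagePull:
  description: Tests that pull_image.sl finishes with SUCCESS
  testFlowPath: examples.docker.pull_image
  # this test runs only when the "docker" suite is active in the build
  testSuites: [docker]
  result: SUCCESS
```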
Test Case Syntax¶
CloudSlang test files are written in YAML with the .inputs.yaml extension and contain one or more test cases.
Each test case begins with a unique key that is the test case name. The name is mapped to the following test case properties:
Property | Required | Value Type | Description |
---|---|---|---|
inputs | no | list of key:value pairs | inputs to pass to the flow or operation being tested |
systemPropertiesFile | no | string | path to the system properties file for the flow or operation; ${project_path} can be used for specifying a path relative to the project path, for example, systemPropertiesFile: ${project_path}\content\base\properties.yaml |
description | no | string | description of test case |
testFlowPath | yes | string | qualified name of the flow, operation or wrapper flow to test |
testSuites | no | list | list of suites this test belongs to |
outputs | no | list of key:value pairs | expected output values of the flow, operation or wrapper flow being tested |
result | no | flow or operation result | expected result of the flow, operation or wrapper flow being tested |
throwsException | no | boolean | whether or not to expect an exception |
Note
The outputs parameter does not need to test all of a flow or operation’s outputs.
Example - test cases that test the match_regex operation
testMatchRegexMatch:
  inputs:
    - regex: 'a+b'
    - text: aaabc
  description: Tests that match_regex.sl operation finishes with MATCH for specified regex/text
  testFlowPath: io.cloudslang.base.strings.match_regex
  outputs:
    - match_text: 'aaab'
  result: MATCH

testMatchRegexMissingInputs:
  inputs:
    - text: HELLO WORLD
  description: Tests that match_regex.sl operation throws an exception when a required input is missing
  testFlowPath: io.cloudslang.base.strings.match_regex
  outputs:
    - match_text: ''
  throwsException: true
Run Tests¶
To run test cases use the CloudSlang Build Tool. Test cases are not run by the CloudSlang CLI.
Best Practices¶
The following is a list of best practices for authoring CloudSlang files. Many of these best practices are checked when using the CloudSlang Build Tool.
CloudSlang Content Best Practices¶
- The namespace for a file matches the suffix of the file path in which the file resides, for example, the send_mail operation is found in the cloudslang-content/io/cloudslang/base/mail folder, so it uses the namespace io.cloudslang.base.mail.
- Namespaces should be composed of only lowercase alphanumeric characters (a-z and 0-9), underscores (_), periods (.) and hyphens (-).
- A flow or operation has the same name as the file it is in.
- Each file has one flow or one operation.
- Flows and operations reside together in the same folders.
- System properties files do not reside in the same folder together with flows and operations.
- Steps call subflows or operations using their fully qualified name or an alias created in the imports section, even when the subflow or operation resides in the same folder as the calling flow.
- Identifiers (flow names, operation names, input names, etc.) are written:
  - In snake_case, lowercase letters with underscores (_) between words, in all cases other than inputs to a Java @Action.
  - In camelCase, starting with a lowercase letter and each additional word starting with an uppercase letter appended without a delimiter, for inputs to a Java @Action.
  - Results are written in ALL_CAPS.
- Assign only relevant default values. For example, 8080 is a good candidate for a port number, but john_doe is probably not a good candidate for a user name.
- String values are written in one of the YAML quoted styles (' or ") or block styles (| or >). For more information, see YAML Overview - Scalars.
- Flow and operation files begin with a commented description and list of annotated inputs, outputs and results (see CloudSlang Comments Style Guide).
  - Optional parameters, default and valid values are noted.
  - Examples are provided when useful.
CloudSlang Tests Best Practices¶
- Tests are contained in a directory with a folder structure identical to the structure of the directory they are testing.
- Tests for a particular CloudSlang file are written in a file with the same name, but with the .inputs.yaml extension, for example, the flow print_text.sl is tested by tests in print_text.inputs.yaml.
- Wrapper flows reside in the same folder as the tests that call them.
- String values are written in one of the YAML quoted styles (' or ") or block styles (| or >). For more information, see YAML Overview - Scalars.
Note
In future releases some of the above best practices may be required by the CloudSlang compiler.
CloudSlang Inputs Files Best Practices¶
- The name of an inputs file ends with .inputs.yaml.
CloudSlang Comments Style Guide¶
All CloudSlang flows and operations should begin with a documentation block that describes the flow or operation, and lists the inputs, outputs and results.
A flow or operation’s documentation may be viewed from the CLI using the inspect command.
- Documentation blocks begin with a line containing #!! and nothing else.
- Documentation blocks end with a line containing #!!# and nothing else.
- Each line of the documentation begins with #!.
- Lines in the documentation block that do not begin with #! are not considered part of the documentation and will not display when the file is inspected.
- The @description tag is the only mandatory tag.
- The other possible tags are:
  - @prerequisites
  - @input <input_name>
  - @output <output_name>
  - @result <result_name>
####################################################
#!!
#! @description: Does something fantastic.
#!
#! @prerequisites: Some Python module.
#!
#! @input input_1: first input
#! @input input_2: second input
#! default: true
#! valid: true, false
#! @input input_3: third input
#! optional
#! example: 'someone@mailprovider.com'
#! @input input_4: fourth input
#! format: space delimited list of strings
#! @output output_1: first output
#! @result SUCCESS: good
#! @result FAILURE: bad
#!!#
####################################################
Description¶
- Written as a sentence, beginning with a capital letter and ending with a period.
- Written in the present tense, for example, “Prints text.”.
- Does not include “This flow” or “This operation” or anything similar.
Prerequisites¶
- Flows and operations that assume prerequisites should declare them.
Inputs, Outputs and Results¶
- Fields appear in the same order as they appear in the code.
- Description begins with a lowercase letter (unless a proper name or capitalized acronym) and does not end with a period.
- Usage of the words “the” and “a” is strongly discouraged, especially at the beginning of the description.
- Description does not include “this flow”, “this operation”, “this field” or anything similar.
- Proper names and acronyms that are normally capitalized are capitalized, for example, HTTP, Docker, ID.
Inputs and Outputs¶
- Written in the present tense, for example, “true if job exists”.
- Non-required fields contain the “optional” label.
- Additional labels are “default:”, “example:”, “valid:” and “format:”.
Results¶
- Actions written in the past tense, for example, “error occurred”. States written in the present tense, for example, “application is up”.
- Default results which do not require any additional explanation are omitted.
Recurring Fields¶
Fields that appear often with the same meaning should have the same name and description across flows and operations. However, if the meaning is specific to the flow or operation, the field description may be different. Some examples are:
- FAILURE - otherwise
- error_message - error message if error occurred
- command - command to execute
CLI¶
There are several ways to get started with the CloudSlang CLI.
Download and Run Pre-built CLI¶
Prerequisites: To run the CloudSlang CLI, Java JRE version 7 or higher is required.
- Download the CLI with content zip file.
- Locate the downloaded file and unzip the archive.
  The decompressed file contains:
  - A folder named cslang-cli with the CLI tool and its necessary dependencies.
  - A folder named content with ready-made CloudSlang flows and operations.
  - A folder named python-lib.
- Navigate to the cslang-cli\bin\ folder.
- Run the executable:
  - For Windows: cslang.bat.
  - For Linux: bash cslang.
Download, Build and Run CLI¶
Prerequisites: To build the CloudSlang CLI, Java JDK version 7 or higher and Maven version 3.0.3 or higher are required.
- Git clone (or GitHub fork and then clone) the source code.
- Using the Command Prompt, navigate to the project root directory.
- Build the project by running mvn clean install.
- After the build finishes, navigate to the cloudslang-cli\target\cloudslang\bin folder.
- Run the executable:
  - For Windows: cslang.bat.
  - For Linux: bash cslang.
Download and Install npm Package¶
Prerequisites: To download the package, Node.js is required. To run the CloudSlang CLI, Java JRE version 7 or higher is required.
- At a command prompt, enter npm install -g cloudslang-cli.
  - If using Linux, the sudo command might be necessary: sudo npm install -g cloudslang-cli.
- Enter the cslang command at any command prompt.
Docker Image¶
There are two CloudSlang Docker images. One (cloudslang/cloudslang) is a lightweight image meant to get you running CloudSlang flows as quickly as possible. The other image (cloudslang/cloudslang-dev) adds the tools necessary to develop CloudSlang flows.
cloudslang/cloudslang¶
This image includes:
- Java
- CloudSlang CLI
- CloudSlang content
To get the image: docker pull cloudslang/cloudslang
To run a flow with a CloudSlang prompt:
docker run -it cloudslang/cloudslang
- At the prompt enter:
run --f ../content/io/cloudslang/.../flow.sl --i input1=value1
Or, to run the flow without the prompt:
docker run --rm cloudslang/cloudslang run --f ../content/io/cloudslang/.../flow.sl --i input1=value1
cloudslang/cloudslang-dev¶
This image includes:
- Java
- CloudSlang CLI
- CloudSlang content
- Python
- Pip
- Vim
- Emacs
- SSH
- Git
- Atom
- language-cloudslang Atom package
To get the image: docker pull cloudslang/cloudslang-dev
Configure the CLI¶
The CLI can be configured using the configuration file found at cslang-cli/configuration/cslang.properties.
Some of the configuration items are listed in the table below:
Configuration key | Default value | Description |
---|---|---|
log4j.configuration | file:${app.home}/configuration/logging/log4j.properties | Location of logging configuration file |
cslang.encoding | utf-8 | Character encoding for input values and input files |
maven.home | ${app.home}/maven/apache-maven-x.y.z | Location of CloudSlang Maven repository home directory |
maven.settings.xml.path | ${app.home}/maven/conf/settings.xml | Location of Maven settings file |
cloudslang.maven.repo.local | ${app.home}/maven/repo | Location of local repository |
cloudslang.maven.repo.remote.url | http://repo1.maven.org/maven2 | Location of remote Maven repository |
cloudslang.maven.plugins.remote.url | http://repo1.maven.org/maven2 | Location of remote Maven plugins |
Logging Configuration¶
The CLI’s logging can be configured using the logging configuration file. The location of the logging configuration file is defined in the CLI’s configuration file.
Maven Configuration¶
The CLI uses Maven to manage Java action dependencies. There are several Maven configuration properties found in the CLI’s configuration file. To configure Maven to use a remote repository other than Maven Central, edit the values for cloudslang.maven.repo.remote.url and cloudslang.maven.plugins.remote.url.
Additionally, you can edit the proxy settings in the file found at maven.settings.xml.path.
Maven Troubleshooting¶
It is possible that the CLI’s Maven repository can become corrupted. In such a case, delete the entire repo folder found at the location indicated by the cloudslang.maven.repo.local key in the CLI’s configuration file and rerun the flow.
Use the CLI¶
When a flow is run, the entire directory in which the flow resides is scanned recursively (including all subfolders) for files with a valid CloudSlang extension. All of the files found are compiled by the CLI. If the --cp flag is used, all of the directories listed there will be scanned and compiled recursively as well.
Note
Use forward slashes (/) in all file paths, even on Windows, because backslashes (\) can be interpreted as special characters.
Run a Flow or Operation¶
To run a flow or operation located at c:/.../your_flow.sl, use the --f flag to specify the location of the flow to be run:
cslang>run --f c:/.../your_flow.sl
Run with Inputs¶
From the Command Line¶
If the flow or operation takes in input parameters, use the --i flag and a comma-separated list of key=value pairs:
cslang>run --f c:/.../your_flow.sl --i input1=root,input2=25
Commas (,) can be used as part of input values by escaping them with a backslash (\).
cslang>run --f c:/.../your_flow.sl --i list=1\,2\,3
To use inputs that include spaces, enclose the entire input list in quotes ("):
cslang>run --f c:/.../your_flow.sl --i "input1=Hello World, input2=x"
Double quotes (") can be used as part of quoted input values by escaping them with a backslash (\). When using a quoted input list, spaces between input parameters will be trimmed.
To pass the value “Hello” World to a flow:
cslang>run --f c:/.../your_flow.sl --i "input1=\"Hello\" World"
Using an Inputs File¶
Alternatively, inputs may be loaded from a file. Inputs files are written in flat YAML, containing a map of names to values. Inputs files end with the .yaml or .yml extensions. It is a best practice for the name of an inputs file to end with .inputs.yaml. If multiple inputs files are being used and they contain an input with the same name, the input in the file that is loaded last will overwrite the others with the same name.
Inputs files can be loaded automatically if placed in a folder located at cslang-cli/configuration/inputs. If the flow requires an inputs file that is not loaded automatically, use the --if flag and a comma-separated list of file paths. Inputs passed with the --i flag will override the inputs passed using a file.
Example - same inputs passed to flow using command line and inputs file
Inputs passed from the command line - run command
cslang>run --f C:/.../your_flow.sl --i "input1=simple text,input2=comma\, text,input3=\"quoted text\""
Inputs passed using an inputs file - run command
cslang>run --f C:/.../your_flow.sl --if C:/.../inputs.yaml
Inputs passed using an inputs file - inputs.yaml file
input1: simple text
input2: comma, text
input3: '"quoted text"'
Run with Dependencies¶
If the flow requires dependencies, they can be added to the classpath using the --cp flag with a comma-separated list of dependency paths. If no --cp flag is present, the cslang-cli/content folder is added to the classpath by default. If there is no --cp flag and no cslang-cli/content folder, the running flow or operation’s folder is added to the classpath by default.
cslang>run --f c:/.../your_flow.sl --i input1=root,input2=25 --cp c:/.../yaml
Run with System Properties¶
A system properties file is a type of CloudSlang file that contains a list of system property keys and values. If multiple system properties files are being used and they contain a system property with the same fully qualified name, the property in the file that is loaded last will overwrite the others with the same name.
System property names (keys) can contain alphanumeric characters (A-Za-z0-9), underscores (_) and hyphens (-). For more information on the structure of system properties files see the CloudSlang Files and properties sections of the DSL Reference.
System property files can be loaded automatically if placed in a folder or subfolder within cslang-cli/configuration/properties. If the flow or operation requires a system properties file that is not loaded automatically, use the --spf flag and a comma-separated list of file paths.
cslang>run --f c:/.../your_flow.sl --spf c:/.../yaml
Example - system properties file
namespace: examples.sysprops
properties:
  - host: 'localhost'
  - port: 8080
Note
System property values that are non-string types (numeric, list, map, etc.) are converted to string representations. A system property may have a value of null.
An empty system properties file can be defined using an empty list.
Example: empty system properties file
namespace: examples.sysprops
properties: []
Run in Non-Interactive Mode¶
A flow can be run without first starting up the CLI using the non-interactive mode.
From a shell prompt:
Windows
>cslang.bat run --f c:/.../your_flow.sl
Linux
>cslang run --f c:/.../your_flow.sl
Change the Verbosity Level¶
The CLI can run flows and operations at several levels of verbosity.
To change the verbosity level, use the --v flag.
Verbosity level | Printed to the screen | Syntax |
---|---|---|
default | step names and top-level outputs | no flag or --v default |
quiet | top-level outputs | --v quiet |
debug | default + each step’s published variables | --v or --v debug |
Run in quiet mode:
cslang>run --f c:/.../your_flow.sl --v quiet
Run in debug mode:
cslang>run --f c:/.../your_flow.sl --v
Inspect a Flow or Operation¶
To view a flow or operation’s description, inputs, outputs and results, use the inspect command.
cslang>inspect c:/.../your_flow.sl
List System Properties¶
To list the properties contained in a system properties file, use the list command.
cslang>list c:/.../your_properties.prop.sl
Other Commands¶
Some of the available commands are:
env --setAsync - Sets the execution mode to be synchronous (false) or asynchronous (true). By default the execution mode is synchronous, meaning only one flow can run at a time.
cslang>env --setAsync true
inputs - Lists the inputs of a given flow.
cslang>inputs --f c:/.../your_flow.sl
cslang --version - Displays the version of the CLI being used.
cslang>cslang --version
Execution Log¶
The execution log is saved at cslang-cli/logs/execution.log. The log file stores all the events that have been fired, and therefore allows for tracking a flow’s execution.
Maven Log¶
Log files of Maven activity are saved at cslang-cli/logs/maven/. Each artifact’s activity is stored in a file named with the convention <group>_<artifact>_<version>.log.
History¶
The CLI history is saved at cslang-cli/cslang-cli.history.
Help¶
To get a list of available commands, enter help at the CLI cslang> prompt. For further help, enter help and the name of the command.
Build Tool¶
The CloudSlang Build Tool checks the syntactic validity of CloudSlang files, their adherence to many of the best practices and runs their associated tests.
Running the CloudSlang Build Tool performs the following steps, each of which is written to the console:
- Displays the active project, content and test paths.
- Displays a list of the active test suites.
- Compiles all CloudSlang files found in the content directory and all of its subfolders.
  - If there is a compilation error, it is displayed and the build terminates.
- Compiles all CloudSlang test flows found in the test directory and all of its subfolders.
- Parses all test cases files found in the test directory and all of its subfolders.
- Runs all test cases found in the test case files that have no test suite or have a test suite that is active.
- Displays the test cases that were skipped.
- Reports the build’s status.
- If the build fails, a list of failed test cases are displayed.
Sample Builder Run
11:08:12 [INFO]
11:08:12 [INFO] ------------------------------------------------------------
11:08:12 [INFO] Building project: C:\CloudSlang\test_code\build_tool
11:08:12 [INFO] Content root is at: C:\CloudSlang\test_code\build_tool\content
11:08:12 [INFO] Test root is at: C:\CloudSlang\test_code\build_tool\test
11:08:12 [INFO] Active test suites are: [default]
11:08:12 [INFO] Validate description: true
11:08:12 [INFO]
11:08:12 [INFO] Loading...
11:08:17 [INFO]
11:08:17 [INFO] ------------------------------------------------------------
11:08:17 [INFO] Building project: build_tool
11:08:17 [INFO] ------------------------------------------------------------
11:08:17 [INFO]
11:08:17 [INFO] --- compiling sources ---
11:08:17 [INFO] Start compiling all slang files under: C:\CloudSlang\test_code\build_tool\content
11:08:17 [INFO] 1 .sl files were found
11:08:17 [INFO]
11:08:17 [INFO] Compiled: 'build_tool.content.operation' successfully
11:08:17 [INFO] Successfully finished Compilation of: 1 Slang files
11:08:17 [INFO]
11:08:17 [INFO] --- compiling tests sources ---
11:08:17 [INFO] Start compiling all slang files under: C:\CloudSlang\test_code\build_tool\test
11:08:17 [INFO] 0 .sl files were found
11:08:17 [INFO]
11:08:17 [INFO] Compiled: 'build_tool.content.operation' successfully
11:08:17 [INFO]
11:08:17 [INFO] --- parsing test cases ---
11:08:17 [INFO] Start parsing all test cases files under: C:\CloudSlang\test_code\build_tool\test
11:08:17 [INFO] 1 test cases files were found
11:08:17 [INFO]
11:08:17 [INFO] --- running tests ---
11:08:17 [INFO] Found 2 tests
11:08:17 [INFO] Running test: testOperationFailure - Tests that operation.sl finishes with FAILURE
11:08:23 [ERROR] Test case failed: testOperationFailure - Tests that operation.sl finishes with FAILURE
Expected result: FAILURE
Actual result: SUCCESS
11:08:23 [INFO] Running test: testOperationSuccess - Tests that operation.sl finishes with SUCCESS
11:08:23 [INFO] Test case passed: testOperationSuccess. Finished running: build_tool.content.operation with result: SUCCESS
11:08:23 [INFO] ------------------------------------------------------------
11:08:23 [INFO] Following 1 test cases passed:
11:08:23 [INFO] - testOperationSuccess
11:08:23 [INFO]
11:08:23 [INFO] ------------------------------------------------------------
11:08:23 [INFO] Following 1 executables have tests:
11:08:23 [INFO] - build_tool.content.operation
11:08:23 [INFO]
11:08:23 [INFO] ------------------------------------------------------------
11:08:23 [INFO] Following 0 executables do not have tests:
11:08:23 [INFO]
11:08:23 [INFO] ------------------------------------------------------------
11:08:23 [INFO] 100% of the content has tests
11:08:23 [INFO] Out of 1 executables, 1 executables have tests
11:08:23 [INFO] 1 test cases passed
11:08:23 [ERROR]
11:08:23 [ERROR] ------------------------------------------------------------
11:08:23 [ERROR] BUILD FAILURE
11:08:23 [ERROR] ------------------------------------------------------------
11:08:23 [ERROR] CloudSlang build for repository: "C:\CloudSlang\test_code\build_tool" failed due to failed tests.
11:08:23 [ERROR] Following 1 tests failed:
11:08:23 [ERROR] - Test case failed: testOperationFailure - Tests that operation.sl finishes with FAILURE
Expected result: FAILURE
Actual result: SUCCESS
11:08:23 [ERROR]
Configure the Build Tool¶
The Build Tool can be configured using the configuration file found at cslang-builder/configuration/cslang.properties.
Configuration key | Default value | Description |
---|---|---|
cslang.encoding | utf-8 | Character encoding for input values and input files |
maven.home | ${app.home}/maven/apache-maven-x.y.z | Location of CloudSlang Maven repository home directory |
maven.settings.xml.path | ${app.home}/maven/conf/settings.xml | Location of Maven settings file |
cloudslang.maven.repo.local | ${app.home}/maven/repo | Location of local repository |
cloudslang.maven.repo.remote.url | http://repo1.maven.org/maven2 | Location of remote Maven repository |
cloudslang.maven.plugins.remote.url | http://repo1.maven.org/maven2 | Location of remote Maven plugins |
cloudslang.test.case.report.location | ${app.home}/report | Location of test case report |
Maven Configuration¶
The Build Tool uses Maven to manage Java action dependencies. There are several Maven configuration properties found in the Build Tool's configuration file. To configure Maven to use a remote repository other than Maven Central, edit the values for cloudslang.maven.repo.remote.url and cloudslang.maven.plugins.remote.url.
Additionally, you can edit the proxy settings in the file found at the location indicated by maven.settings.xml.path.
Maven Troubleshooting¶
It is possible for the Build Tool's Maven repository to become corrupted. In such a case, delete the entire repo folder found at the location indicated by the cloudslang.maven.repo.local key in the Build Tool's configuration file and rerun the builder.
Use the Build Tool¶
The CloudSlang Build Tool builds projects. A project consists of a folder that contains the CloudSlang content and a folder containing the tests for the content.
By default the build tool will look for a folder named content and a folder named test in the project folder to use as the content and test folders respectively. If they are present in the project folder, they do not have to be passed to the build tool.
To use the CloudSlang Build Tool with default settings, run the cslang-builder executable from the command line and pass the path to the project folder.
<builder path>\cslang-builder\bin>cslang-builder.bat <project path>
To use the CloudSlang Build Tool with specific settings, run the cslang-builder executable from the command line and pass the following arguments:
Argument | Default | Description |
---|---|---|
-pr | current folder | project root folder |
-cr | <project root>/content | content root folder |
-tr | <project root>/test | test root folder |
-ts | none | list of test suites to run; use !default to skip tests that are not included in a test suite |
-cov | off | whether or not test coverage data should be output |
-des | off | whether or not to validate that all inputs, outputs and results have descriptions |
-par | false | whether or not parallel test execution should be used |
-th | number of available processors for the machine | number of threads for parallel runs |
Dynamic Parameters
Parameter | Description |
---|---|
-Dtest.case.timeout.in.minutes | number of minutes to wait before test case timeout |
Note
To skip tests not included in a test suite when using Linux, the exclamation mark (!) needs to be escaped with a preceding backslash (\). So, to ignore default tests, pass \!default.
Note
Test coverage is calculated as a percentage of flows and operations for which tests exist, regardless of how much of each flow or operation is covered by the test. Additionally, a flow or operation will be considered covered even if its test’s suite did not run during the current build. The mere existence of a test for a flow or operation is enough to consider it as covered.
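The coverage figure in the sample build output above ("100% of the content has tests") follows directly from this rule. A minimal sketch of the calculation in Java (the Coverage class and percentCovered method are illustrative, not part of the builder):

```java
// Coverage as the builder reports it: an executable counts as covered
// if at least one test exists for it, regardless of which suites ran.
public class Coverage {
    public static int percentCovered(int executablesWithTests, int totalExecutables) {
        return (int) Math.round(100.0 * executablesWithTests / totalExecutables);
    }

    public static void main(String[] args) {
        // Matches the sample output above: out of 1 executables, 1 has tests.
        System.out.println(percentCovered(1, 1) + "% of the content has tests");
    }
}
```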
Build Tool Log¶
The builder log is saved at cslang-builder/logs/builder.log.
Maven Log¶
Log files of Maven activity are saved at cslang-builder/logs/maven/. Each artifact's activity is stored in a file named with the convention <group>_<artifact>_<version>.log.
Maven Content Compiler¶
The CloudSlang Maven Content Compiler can be used to compile CloudSlang source files and receive indications of errors without using the CloudSlang CLI or Build Tool.
The CloudSlang Maven Content Compiler is an artifact in the cloud-slang module. It extends the Plexus Compiler Project in order to leverage the use of the existing maven-compiler-plugin.
To use the compiler, make the artifact available in the classpath when the Compiler Plugin runs. This is achieved by adding a dependency when declaring the plugin in your project’s pom.xml.
The example below shows how to use the compiler:
<project>
  [...]
  <build>
    [...]
    <plugins>
      [...]
      <plugin>
        <artifactId>maven-compiler-plugin</artifactId>
        <version>3.5.1</version>
        <configuration>
          <compilerId>cloudslang</compilerId>
        </configuration>
        <dependencies>
          <dependency>
            <groupId>io.cloudslang.lang</groupId>
            <artifactId>cloudslang-content-maven-compiler</artifactId>
            <version><any_version></version>
          </dependency>
        </dependencies>
      </plugin>
      [...]
    </plugins>
    [...]
  </build>
  [...]
</project>
CloudSlang Editors¶
Although CloudSlang files can be composed in any text editor, using a modern code editor with support for syntax highlighting is recommended.
Atom¶
The language-cloudslang Atom package includes CloudSlang syntax highlighting and many code snippets.
Download, Install and Configure Atom¶
- Download and install Atom.
- Download and install the CloudSlang language package.
- From the Atom UI: File > Settings > Install and search for language-cloudslang
- From the command line:
apm install language-cloudslang
Note
If you are behind a proxy server you may need to configure Atom as described in their package manager documentation.
- Reload (View > Reload) or restart Atom.
- Files saved with the .sl extension will be recognized within Atom as CloudSlang files.
Snippets¶
To use the snippets, start typing the snippet name and press Enter when it appears on the screen.
The following snippets are provided:
Keyword | Description |
---|---|
flow | template for a flow file |
operation | template for an operation file |
decision | template for a decision file |
properties | template for a system properties file |
java_action | template for a Java action |
python_action | template for a Python action |
input | template for simple input name and value |
input with properties | template for an input with all possible properties |
output | template for an output name and value |
output with properties | template for an output with all possible properties |
result | template for a result name and value |
publish | template for a published variable name and value |
import | template for an import alias name and namespace |
navigate | template for a result mapped to a navigation target |
step | template for a standard step |
on_failure | template for an on_failure step |
for | template for an iterative step |
parallel_loop | template for a parallel step |
property | template for a system property |
property with properties | template for a system property with all possible properties |
@input | template for input documentation |
@description | template for file description documentation |
@prerequisites | template for prerequisite documentation |
@output | template for output documentation |
@result | template for result documentation |
Atom Troubleshooting¶
For troubleshooting Atom issues, see the Atom documentation and discussion board.
Developer¶
Contents:
Developer Overview¶
This section contains a brief overview of how CloudSlang and the CloudSlang Orchestration Engine (Score) work. For more detailed information see the Score API and Slang API sections.
The CloudSlang Orchestration Engine is an engine that runs workflows. Internally, the workflows are represented as ExecutionPlans. An ExecutionPlan is essentially a map of IDs and ExecutionSteps. Each ExecutionStep contains information for calling an action method and a navigation method.
When an ExecutionPlan is triggered, it executes the first ExecutionStep's action method and navigation method. The navigation method returns the ID of the next ExecutionStep to run. Execution continues in this manner, successively calling the next ExecutionStep's action and navigation methods, until a navigation method returns null to indicate the end of the flow.
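The execution loop described above can be sketched with a toy model. The PlanWalk class below is purely illustrative: real Score steps reference their action and navigation methods via ControlActionMetadata and are invoked by reflection, not stored as lambdas.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Toy model of the engine loop: each step has an action and a navigation
// function; navigation returns the next step ID, or null to end the flow.
public class PlanWalk {
    static class Step {
        final Runnable action;
        final Function<Long, Long> navigation; // current ID -> next ID, or null
        Step(Runnable action, Function<Long, Long> navigation) {
            this.action = action;
            this.navigation = navigation;
        }
    }

    public static int run(Map<Long, Step> plan, long beginStep) {
        int executed = 0;
        Long current = beginStep;
        while (current != null) {                     // null ends the flow
            Step step = plan.get(current);
            step.action.run();                        // action method first
            current = step.navigation.apply(current); // then navigation
            executed++;
        }
        return executed;
    }

    public static int demo() {
        Map<Long, Step> plan = new HashMap<>();
        plan.put(0L, new Step(() -> System.out.println("step 0 action"), id -> 1L));
        plan.put(1L, new Step(() -> System.out.println("step 1 action"), id -> null));
        return run(plan, 0L); // executes both steps, then stops
    }
}
```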
CloudSlang plugs into the CloudSlang Orchestration Engine (Score) by compiling its workflow and operation files into Score ExecutionPlans and then triggering them. Generally, when working with CloudSlang content, all interaction with Score goes through the Slang API, not the Score API.
CloudSlang¶
Embedded CloudSlang¶
CloudSlang content can be run from inside an existing Java application using Maven and Spring by embedding the CloudSlang Orchestration Engine (Score) and interacting with it through the Slang API.
Embed CloudSlang in a Java Application¶
- Add the Score and CloudSlang dependencies to the project's pom.xml file in the <dependencies> tag.
<dependency>
<groupId>io.cloudslang</groupId>
<artifactId>score-all</artifactId>
<version>0.3.28</version>
</dependency>
<dependency>
<groupId>io.cloudslang.lang</groupId>
<artifactId>cloudslang-all</artifactId>
<version>0.9.60</version>
</dependency>
<dependency>
<groupId>com.h2database</groupId>
<artifactId>h2</artifactId>
<version>1.3.175</version>
</dependency>
- Add Score and CloudSlang configuration to your Spring application context xml file.
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:score="http://www.cloudslang.io/schema/score"
xsi:schemaLocation="http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans.xsd
http://www.cloudslang.io/schema/score
http://www.cloudslang.io/schema/score.xsd">
<bean class="io.cloudslang.lang.api.configuration.SlangSpringConfiguration"/>
<score:engine />
<score:worker uuid="-1"/>
</beans>
- Get the Slang bean from the application context and interact with it using the Slang API.
ApplicationContext applicationContext =
new ClassPathXmlApplicationContext("/META-INF/spring/cloudSlangContext.xml");
Slang slang = applicationContext.getBean(Slang.class);
slang.subscribeOnAllEvents(new ScoreEventListener() {
@Override
public void onEvent(ScoreEvent event) {
System.out.println(event.getEventType() + " : " + event.getData());
}
});
Slang API¶
The Slang API allows a program to interact with the CloudSlang Orchestration Engine (Score) using content authored in CloudSlang. What follows is a brief discussion of the API using a simple example that compiles and runs a flow while listening for the events that are fired during the run.
Example¶
Code¶
Java Class - CloudSlangEmbed.java
package io.cloudslang.example;
import io.cloudslang.score.events.ScoreEvent;
import io.cloudslang.score.events.ScoreEventListener;
import io.cloudslang.lang.api.Slang;
import io.cloudslang.lang.compiler.SlangSource;
import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;
import java.io.File;
import java.io.IOException;
import java.io.Serializable;
import java.net.URISyntaxException;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Set;
public class CloudSlangEmbed {
public static void main(String[] args) throws URISyntaxException, IOException{
ApplicationContext applicationContext =
new ClassPathXmlApplicationContext("/META-INF/spring/cloudSlangContext.xml");
Slang slang = applicationContext.getBean(Slang.class);
slang.subscribeOnAllEvents(new ScoreEventListener() {
@Override
public void onEvent(ScoreEvent event) {
System.out.println(event.getEventType() + " : " + event.getData());
}
});
File flowFile = getFile("/content/hello_world.sl");
File operationFile = getFile("/content/print.sl");
Set<SlangSource> dependencies = new HashSet<>();
dependencies.add(SlangSource.fromFile(operationFile));
HashMap<String, Serializable> inputs = new HashMap<>();
inputs.put("input1", "Hi. I'm inside this application.\n-CloudSlang");
slang.compileAndRun(SlangSource.fromFile(flowFile), dependencies, inputs,
new HashMap<String, Serializable>());
}
private static File getFile(String path) throws URISyntaxException {
return new File(CloudSlangEmbed.class.getResource(path).toURI());
}
}
Flow - hello_world.sl
namespace: resources.content

imports:
  ops: resources.content

flow:
  name: hello_world

  inputs:
    - input1

  workflow:
    - sayHi:
        do:
          ops.print:
            - text: ${input1}
Operation - print.sl
namespace: resources.content

operation:
  name: print

  inputs:
    - text

  python_action:
    script: print text

  results:
    - SUCCESS
Discussion¶
The program begins by creating the Spring application context and getting the Slang bean. In general, most of the interactions with Score are transmitted through the reference to this bean.
ApplicationContext applicationContext =
new ClassPathXmlApplicationContext("/META-INF/spring/cloudSlangContext.xml");
Slang slang = applicationContext.getBean(Slang.class);
Next, the subscribeOnAllEvents method is called and passed a new ScoreEventListener to listen to all the Score and Slang events that are fired.
slang.subscribeOnAllEvents(new ScoreEventListener() {
@Override
public void onEvent(ScoreEvent event) {
System.out.println(event.getEventType() + " : " + event.getData());
}
});
The ScoreEventListener interface defines only one method, the onEvent method. In this example the onEvent method is overridden to print out the type and data of all events it receives.
The API also contains a subscribeOnEvents method, which takes in a set of the event types to listen for, and an unSubscribeOnEvents method, which unsubscribes the listener from all the events it was listening for.
Next, the two content files, containing a flow and an operation respectively, are loaded into File objects.
File flowFile = getFile("/content/hello_world.sl");
File operationFile = getFile("/content/print.sl");
These File objects will be used to create the two SlangSource objects needed to compile and run the flow and its operation.
A SlangSource object is a representation of source code written in CloudSlang along with the source's name. The SlangSource class exposes several static methods for creating new SlangSource objects from files, URIs or arrays of bytes.
Next, a set of dependencies is created and the operation is added to the set.
Set<SlangSource> dependencies = new HashSet<>();
dependencies.add(SlangSource.fromFile(operationFile));
A flow containing many operations or subflows would need all of its dependencies loaded into the dependency set.
Next, a map of input names to values is created. The input names are as they appear under the inputs key in the flow's CloudSlang file.
HashMap<String, Serializable> inputs = new HashMap<>();
inputs.put("input1", "Hi. I'm inside this application.\n-CloudSlang");
Finally, the flow is compiled and run by providing its SlangSource, dependencies, inputs and an empty map of system properties.
slang.compileAndRun(SlangSource.fromFile(flowFile), dependencies,
inputs, new HashMap<String, Serializable>());
An operation can be compiled and run in the same way.
Although we compile and run here in one step, the process can be broken up into its component parts. The Slang interface exposes a method to compile a flow or operation without running it. That method returns a CompilationArtifact which can then be run with a call to the run method.
A CompilationArtifact is composed of a Score ExecutionPlan, a map of dependency names to their ExecutionPlans and a list of CloudSlang Inputs.
A CloudSlang Input contains its name, expression and the state of all its input properties (e.g. required).
Slang Events¶
CloudSlang uses Score events and its own extended set of Slang events. Slang events are comprised of an event type string and a map of event data that contains all the relevant event information mapped to keys defined in the org.openscore.lang.runtime.events.LanguageEventData class. All fired events are logged in the execution log file.
Events that contain SensitiveValues will have the sensitive data replaced by the ******** placeholder.
Event types from CloudSlang are listed in the table below along with the event data each event contains.
All Slang events contain the data in the following list. Additional event data is listed in the table below alongside the event type. The event data map keys are enclosed in square brackets - [KEYNAME].
- [DESCRIPTION]
- [TIMESTAMP]
- [EXECUTIONID]
- [PATH]
- [STEP_TYPE]
- [STEP_NAME]
- [TYPE]
Type [TYPE] | Usage | Event Data |
---|---|---|
EVENT_INPUT_START | Input binding started for flow or operation | [INPUTS] |
EVENT_INPUT_END | Input binding finished for flow or operation | [BOUND_INPUTS] |
EVENT_STEP_START | Step started | |
EVENT_ARGUMENT_START | Argument binding started for step | [ARGUMENTS] |
EVENT_ARGUMENT_END | Step arguments resolved | [BOUND_ARGUMENTS] |
EVENT_OUTPUT_START | Output binding started for flow or operation | [executableResults], [executableOutputs], [actionReturnValues] |
EVENT_OUTPUT_END | Output binding finished for flow or operation | [OUTPUTS], [RESULT], [EXECUTABLE_NAME] |
EVENT_OUTPUT_START | Output binding started for step | [operationReturnValues], [stepNavigationValues], [stepPublishValues] |
EVENT_OUTPUT_END | Output binding finished for step | [nextPosition], [RESULT], [OUTPUTS] |
EVENT_EXECUTION_FINISHED | Execution finished running | [RESULT], [OUTPUTS] |
EVENT_ACTION_START | Before action invocation | [TYPE], [CALL_ARGUMENTS] |
EVENT_ACTION_END | After successful action invocation | [RETURN_VALUES] |
EVENT_ACTION_ERROR | Exception in action execution | [EXCEPTION] |
EVENT_SPLIT_BRANCHES | Parallel loop expression bound | [BOUND_PARALLEL_LOOP_EXPRESSION] |
EVENT_BRANCH_START | Parallel loop branch created | [splitItem], [refId] |
EVENT_BRANCH_END | Parallel loop branch ended | [branchReturnValues] |
EVENT_JOIN_BRANCHES_START | Parallel loop output binding started | [stepNavigationValues], [stepAggregateValues] |
EVENT_JOIN_BRANCHES_END | Parallel loop output binding finished | [nextPosition], [RESULT], [OUTPUTS] |
SLANG_EXECUTION_EXCEPTION | Exception in previous step | [IS_BRANCH], [executionIdContext], [systemContext], [EXECUTION_CONTEXT] |
Engine¶
Embedded Engine¶
The CloudSlang Orchestration Engine (Score) can be embedded inside an existing Java application using Maven and Spring. Interaction with Score is done through the Score API.
Embed Score in a Java Application¶
- Add the Score dependencies to the project's pom.xml file in the <dependencies> tag.
<dependency>
<groupId>io.cloudslang</groupId>
<artifactId>score-all</artifactId>
<version>0.3.28</version>
</dependency>
<dependency>
<groupId>com.h2database</groupId>
<artifactId>h2</artifactId>
<version>1.3.175</version>
</dependency>
- Add Score configuration to your Spring application context xml file.
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:score="http://www.openscore.org/schema/score"
xsi:schemaLocation="http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans.xsd
http://www.openscore.org/schema/score
http://www.openscore.org/schema/score.xsd">
<score:engine/>
<score:worker uuid="-1"/>
<bean class="io.openscore.example.ScoreEmbed"/>
</beans>
- Interact with Score using the Score API.
package io.cloudslang.example;
import org.apache.log4j.Logger;
import io.cloudslang.score.api.*;
import io.cloudslang.score.events.EventBus;
import io.cloudslang.score.events.EventConstants;
import io.cloudslang.score.events.ScoreEvent;
import io.cloudslang.score.events.ScoreEventListener;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.ApplicationContext;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;
import java.io.Serializable;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Set;
public class ScoreEmbed {
@Autowired
private Score score;
@Autowired
private EventBus eventBus;
private final static Logger logger = Logger.getLogger(ScoreEmbed.class);
private ApplicationContext context;
private final Object lock = new Object();
public static void main(String[] args) {
ScoreEmbed app = loadApp();
app.registerEventListener();
app.start();
}
private static ScoreEmbed loadApp() {
ApplicationContext context = new ClassPathXmlApplicationContext("/META-INF/spring/scoreContext.xml");
ScoreEmbed app = context.getBean(ScoreEmbed.class);
app.context = context;
return app;
}
private void start() {
ExecutionPlan executionPlan = createExecutionPlan();
score.trigger(TriggeringProperties.create(executionPlan));
waitForExecutionToFinish();
closeContext();
}
private void waitForExecutionToFinish() {
try {
synchronized(lock){
lock.wait(10000);
}
} catch (InterruptedException e) {
logger.error(e.getStackTrace());
}
}
private static ExecutionPlan createExecutionPlan() {
ExecutionPlan executionPlan = new ExecutionPlan();
executionPlan.setFlowUuid("1");
ExecutionStep executionStep0 = new ExecutionStep(0L);
executionStep0.setAction(new ControlActionMetadata("io.cloudslang.example.controlactions.ConsoleControlActions", "printMessage"));
executionStep0.setActionData(new HashMap<String, Serializable>());
executionStep0.setNavigation(new ControlActionMetadata("io.cloudslang.example.controlactions.NavigationActions", "nextStepNavigation"));
executionStep0.setNavigationData(new HashMap<String, Serializable>());
executionPlan.addStep(executionStep0);
ExecutionStep executionStep1 = new ExecutionStep(1L);
executionStep1.setAction(new ControlActionMetadata("io.cloudslang.example.controlactions.ConsoleControlActions", "printMessage"));
executionStep1.setActionData(new HashMap<String, Serializable>());
executionStep1.setNavigation(new ControlActionMetadata("io.cloudslang.example.controlactions.NavigationActions", "nextStepNavigation"));
executionStep1.setNavigationData(new HashMap<String, Serializable>());
executionPlan.addStep(executionStep1);
ExecutionStep executionStep2 = new ExecutionStep(2L);
executionStep2.setAction(new ControlActionMetadata("io.cloudslang.example.controlactions.ConsoleControlActions", "failed"));
executionStep2.setActionData(new HashMap<String, Serializable>());
executionStep2.setNavigation(new ControlActionMetadata("io.cloudslang.example.controlactions.NavigationActions", "endFlow"));
executionStep2.setNavigationData(new HashMap<String, Serializable>());
executionPlan.addStep(executionStep2);
return executionPlan;
}
private void registerEventListener() {
Set<String> handlerTypes = new HashSet<>();
handlerTypes.add(EventConstants.SCORE_FINISHED_EVENT);
handlerTypes.add(EventConstants.SCORE_FAILURE_EVENT);
eventBus.subscribe(new ScoreEventListener() {
@Override
public void onEvent(ScoreEvent event) {
logger.info("Listener " + this.toString() + " invoked on type: " + event.getEventType() + " with data: " + event.getData());
synchronized (lock) {
lock.notify();
}
}
}, handlerTypes);
}
private void closeContext() {
((ConfigurableApplicationContext) context).close();
}
}
Score API¶
The Score API allows a program to interact with the CloudSlang Orchestration Engine (Score). This section describes some of the more commonly used interfaces and methods from the Score API.
ExecutionPlan¶
An ExecutionPlan is a map of IDs and steps, called ExecutionSteps, representing a workflow for Score to run. Normally, the ID of the first step to be run is 0.
ExecutionSteps can be added to the ExecutionPlan using the addStep(ExecutionStep step) method. The starting step of the ExecutionPlan can be set using the setBeginStep(Long beginStep) method.
ExecutionStep¶
An ExecutionStep is the building block from which an ExecutionPlan is built. It consists of an ID representing its position in the plan, control action information and navigation action information. As each ExecutionStep is reached, its control action method is called, followed by its navigation action method. The navigation action method returns the ID of the next ExecutionStep to be run in the ExecutionPlan or signals the plan to stop by returning null. The ID of an ExecutionStep must be unique among the steps in its ExecutionPlan.
The control action method and navigation action method can be set on the ExecutionStep using the following methods, where a ControlActionMetadata object is created using string values of the method's fully qualified class name and method name:
setAction(ControlActionMetadata action)
setNavigation(ControlActionMetadata navigationMetadata)
Action Method Arguments¶
Both the control action and navigation action are regular Java methods which can take arguments. They are invoked by reflection and their arguments are injected by the Score engine, so there is no API or naming convention for them. But there are some names that are reserved for special use.
There are several ways Score can populate an action method's arguments:
- From the execution context that is passed to the TriggeringProperties when the ExecutionPlan is triggered. When a method such as public void doSomething(String argName) is encountered, Score will attempt to populate the argument argName with a value mapped to the key argName in the execution context. If the key argName does not exist in the map, the argument will be populated with null.
- From data values set in the ExecutionSteps during the creation of the ExecutionPlan. Data can be set using the setActionData and setNavigationData methods.
- From reserved argument names. Some argument names have a special meaning when used as control action or navigation action method arguments:
- executionRuntimeServices - Score will populate this argument with the ExecutionRuntimeServices object: public void doWithServices(ExecutionRuntimeServices executionRuntimeServices)
- executionContext - Score will populate this argument with the context tied to the ExecutionPlan during its triggering through the TriggeringProperties: public void doWithContext(Map<String, Serializable> executionContext)
If an argument is present in both the ExecutionStep data and the execution context, the value from the execution context will be used.
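The precedence rule above can be sketched as a simple map merge. The ArgumentResolution class below is a hypothetical illustration of the behavior, not Score's actual injection code:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of argument-resolution precedence: values from the ExecutionStep's
// action data are used only when the execution context does not also define
// the same key -- the execution context wins on conflicts.
public class ArgumentResolution {
    public static Map<String, Object> resolve(Map<String, Object> actionData,
                                              Map<String, Object> executionContext) {
        Map<String, Object> merged = new HashMap<>(actionData);
        merged.putAll(executionContext); // execution context shadows step data
        return merged;
    }

    public static Object demo() {
        Map<String, Object> actionData = new HashMap<>();
        actionData.put("message", "from step data");
        Map<String, Object> context = new HashMap<>();
        context.put("message", "from execution context");
        // The execution context value is the one injected into the method.
        return resolve(actionData, context).get("message");
    }
}
```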
Action Method Return Values¶
- Control action methods are void and do not return values.
- Navigation action methods return a value of type Long, which is used to determine the next ExecutionStep. Returning null signals the ExecutionPlan to finish.
Score Interface¶
The Score interface exposes methods for triggering and canceling executions.
Triggering New Executions¶
The trigger(TriggeringProperties triggeringProperties) method starts an execution with a given ExecutionPlan and the additional properties found in the TriggeringProperties object. The method returns the ID of the new execution.
By default the first executed step will be the execution plan’s start step, and the execution context will be empty.
Canceling Executions¶
The cancelExecution(Long executionId) method requests to cancel (terminate) a given execution. It is passed the ID that was returned when triggering the execution that is now to be canceled.
Note
The execution will not necessarily be stopped immediately.
TriggeringProperties¶
A TriggeringProperties object is sent to the Score interface’s trigger method when the execution begins.
The TriggeringProperties object contains:
- An ExecutionPlan to run.
- The ExecutionPlan’s dependencies, which are ExecutionPlans themselves.
- A map of names and values to be added to the execution context.
- A map of names and values to be added to the ExecutionRuntimeServices.
- A start step value, which can cause the ExecutionPlan to start from a step that is not necessarily its defined begin step.
The TriggeringProperties class exposes methods to create a TriggeringProperties object from an ExecutionPlan and then optionally set the various other properties.
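The create-then-set pattern can be sketched with a simplified stand-in class. The real io.cloudslang.score.api.TriggeringProperties carries an actual ExecutionPlan and its dependencies; the field and setter names below are assumptions modeled on the description above, so check the Score sources for the exact signatures in your version:

```java
import java.util.HashMap;
import java.util.Map;

// Simplified stand-in for TriggeringProperties: created from a plan,
// then optionally configured fluently before being passed to trigger().
public class TriggeringPropertiesSketch {
    private final String planId; // stand-in for the ExecutionPlan itself
    private Map<String, Object> context = new HashMap<>();
    private Long startStep = 0L; // may differ from the plan's defined begin step

    private TriggeringPropertiesSketch(String planId) {
        this.planId = planId;
    }

    public static TriggeringPropertiesSketch create(String planId) {
        return new TriggeringPropertiesSketch(planId);
    }

    public TriggeringPropertiesSketch setContext(Map<String, Object> context) {
        this.context = context;
        return this;
    }

    public TriggeringPropertiesSketch setStartStep(Long startStep) {
        this.startStep = startStep;
        return this;
    }

    public Long getStartStep() { return startStep; }

    public static long demo() {
        Map<String, Object> context = new HashMap<>();
        context.put("input1", "hello");
        // Start from step 2 instead of the plan's defined begin step.
        TriggeringPropertiesSketch props = create("plan-1")
                .setContext(context)
                .setStartStep(2L);
        return props.getStartStep();
    }
}
```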
ExecutionRuntimeServices¶
The ExecutionRuntimeServices provide a way to communicate with Score during the execution of an ExecutionPlan. During an execution, after each ExecutionStep, the engine will check the ExecutionRuntimeServices to see if there have been any requests made of it and will respond accordingly. These services can be used by a language written on top of Score, as CloudSlang does, to affect the runtime behavior.
The ExecutionRuntimeServices can be injected into an ExecutionStep's action or navigation method's arguments by adding the ExecutionRuntimeServices executionRuntimeServices parameter to the method's argument list.
Some of the services provided by ExecutionRuntimeServices are:
- Events can be added using the addEvent(String eventType, Serializable eventData) method.
- Execution can be paused using the pause() method.
- Errors can be set using the setStepErrorKey(String stepErrorKey) method.
- Branches can be added using the addBranch(Long startPosition, String flowUuid, Map<String, Serializable> context) method or the addBranch(Long startPosition, Long executionPlanId, Map<String, Serializable> context, ExecutionRuntimeServices executionRuntimeServices) method.
- Requests can be made to change the ExecutionPlan that is running by calling the requestToChangeExecutionPlan(Long executionPlanId) method.
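The request-and-check pattern described above can be modeled with a toy services object. The class below is a simplified, hypothetical stand-in for ExecutionRuntimeServices (only addEvent and pause are modeled, with simplified signatures):

```java
import java.util.ArrayList;
import java.util.List;

// Toy model: an action records requests on the services object during the
// step, and the engine inspects them after the step completes.
public class RuntimeServicesSketch {
    private final List<String> events = new ArrayList<>();
    private boolean pauseRequested = false;

    // called from inside an action method
    public void addEvent(String eventType, Object eventData) {
        events.add(eventType + ":" + eventData);
    }

    public void pause() {
        pauseRequested = true;
    }

    // what the engine would check after the ExecutionStep
    public boolean isPauseRequested() { return pauseRequested; }
    public List<String> getEvents() { return events; }

    public static boolean demo() {
        RuntimeServicesSketch services = new RuntimeServicesSketch();
        // An action method would receive this object as an injected argument.
        services.addEvent("MY_EVENT", "finished copying files");
        services.pause();
        // After the step, the engine finds one event and a pause request.
        return services.isPauseRequested() && services.getEvents().size() == 1;
    }
}
```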
EventBus¶
The EventBus allows you to subscribe and unsubscribe listeners for events.
Listeners must implement the ScoreEventListener interface, which consists of a single method: onEvent(ScoreEvent event).
To subscribe a listener for certain events, pass a set of the events to listen for to the subscribe(ScoreEventListener eventHandler, Set<String> eventTypes) method. The event types are defined in the EventConstants class.
To unsubscribe a listener from all the events it was listening for, call the unsubscribe(ScoreEventListener listener) method.
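The subscribe-and-dispatch behavior can be sketched with a minimal bus. This is an illustrative stand-in for io.cloudslang.score.events.EventBus, not its actual implementation:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

// Minimal event bus: a listener registered for a set of event types is
// invoked only when an event of one of those types is dispatched.
public class MiniEventBus {
    interface Listener {
        void onEvent(String type, Object data);
    }

    private final Map<Listener, Set<String>> subscriptions = new LinkedHashMap<>();

    public void subscribe(Listener listener, Set<String> eventTypes) {
        subscriptions.put(listener, eventTypes);
    }

    public void unsubscribe(Listener listener) {
        subscriptions.remove(listener);
    }

    public void dispatch(String type, Object data) {
        for (Map.Entry<Listener, Set<String>> e : subscriptions.entrySet()) {
            if (e.getValue().contains(type)) {
                e.getKey().onEvent(type, data);
            }
        }
    }

    public static int demo() {
        MiniEventBus bus = new MiniEventBus();
        int[] received = {0};
        bus.subscribe((type, data) -> received[0]++,
                new HashSet<>(Arrays.asList("SCORE_FINISHED_EVENT")));
        bus.dispatch("SCORE_FINISHED_EVENT", "done"); // delivered
        bus.dispatch("SCORE_FAILURE_EVENT", "boom");  // filtered out
        return received[0];
    }
}
```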
ScoreEvent¶
A ScoreEvent is comprised of a string value corresponding to its type and a map containing the event data, which can be accessed using the getEventType() and getData() methods respectively.
Score Events¶
The CloudSlang Orchestration Engine (Score) defines two events that may be fired during execution. Each event is comprised of a string value corresponding to its type and a map containing the event data.
Event Types:
- SCORE_FINISHED_EVENT
- SCORE_FAILURE_EVENT
Event Data Keys:
- IS_BRANCH
- executionIdContext
- systemContext
- EXECUTION_CONTEXT
A language built upon Score can add events during runtime using the ExecutionRuntimeServices API. An example of this usage can be seen in CloudSlang’s addition of Slang events.
Architecture¶
Overview¶
To be run by the CloudSlang Orchestration Engine (Score), a CloudSlang source file must undergo a process that transforms it into a Score ExecutionPlan using the SlangCompiler.
Precompilation¶
In the precompilation process, the source file is loaded, along with its dependencies if necessary, and parsed. The CloudSlang file’s YAML structure is translated into Java maps by the YamlParser using snakeyaml. The parsed structure is then modeled into Java objects representing the parts of a flow and operation by the SlangModeller and the ExecutableBuilder. The result of this process is an object of type Executable.
Compilation¶
The resulting Executable object, along with its dependent Executable objects, is then passed to the ScoreCompiler for compilation. An ExecutionPlan is created from the Executable using the ExecutionPlanBuilder. The ExecutionPlanBuilder uses the ExecutionStepFactory to manufacture the appropriate Score ExecutionStep objects and add them to the resulting ExecutionPlan, which is then packaged with its dependent ExecutionPlan objects into a CompilationArtifact.
Running¶
Now that the CloudSlang source has been fully transformed into an ExecutionPlan, it can be run using Score. The ExecutionPlan and its dependencies are extracted from the CompilationArtifact and used to create a TriggeringProperties object. A RunEnvironment is also created and added to the TriggeringProperties context. The RunEnvironment provides services to the ExecutionPlan as it runs, such as keeping track of the context stack and the next step position.
Treatment of Flows and Operations¶
Generally, CloudSlang treats flows and operations similarly.
Flows and operations both:
- Receive inputs, produce outputs, and have navigation logic.
- Can be called by a flow’s step.
- Are compiled to ExecutionPlans that can be run by Score.
Scoped Contexts¶
As execution progresses from flow to operation to action, the step data (inputs, outputs, etc.) that is in scope changes. These contexts are stored in the contextStack of the RunEnvironment and are pushed onto and popped off the stack as the scope changes.
There are three types of scoped contexts:
- Flow context
- Operation context
- Action context
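The push/pop behavior of the context stack can be sketched as follows. This is a minimal illustrative model under assumed names (push_context, pop_context), not the actual Score RunEnvironment class.

```python
# Minimal model of the scoped-context stack described above: a context is
# pushed as execution enters a flow, operation, or action scope and popped
# when that scope ends.
class RunEnvironment:
    def __init__(self):
        self.context_stack = []

    def push_context(self, context):
        self.context_stack.append(context)

    def pop_context(self):
        return self.context_stack.pop()


env = RunEnvironment()
env.push_context({"scope": "flow"})
env.push_context({"scope": "operation"})
env.push_context({"scope": "action"})

# Scopes unwind in reverse order as the action, operation and flow complete.
assert env.pop_context()["scope"] == "action"
assert env.pop_context()["scope"] == "operation"
assert env.pop_context()["scope"] == "flow"
```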

Value Types¶
Each Context stores its data in a Map<String, Value> named variables, where Value declares the isSensitive() method and is one of three value types:
- SimpleValue
- SensitiveValue
- PyObjectValue
SimpleValue is used for non-sensitive inputs, outputs and arguments.
SensitiveValue is used for sensitive inputs, outputs and arguments. Calling the toString() method on a SensitiveValue returns the SENSITIVE_VALUE_MASK (********) instead of its content. During runtime, a SensitiveValue is decrypted upon usage and then encrypted again.
PyObjectValue is an interface that extends Value, adding the isAccessed() method. An object of this type is a (Javassist) proxy, which extends a given PyObject instance and implements the PyObjectValue interface. Value method calls are delegated to an inner Value instance, which can be either a SimpleValue or a SensitiveValue. PyObject method calls are delegated to an inner PyObject, the original one this object extends. PyObject method calls also set an accessed flag to true. This flag indicates whether the value was used in a Python script.
Value types (SimpleValue or SensitiveValue) are propagated automatically from inputs to arguments and to the outputs of Python expression evaluation. An argument or output is sensitive if at least one part of it is sensitive. For example, the result of a + b or some_func(a) will be sensitive if a is sensitive.
Before a Python expression runs, all the arguments passed to it are converted to PyObjectValue. When the expression finishes, all the arguments are checked. If at least one sensitive argument was used, the output is sensitive as well.
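The propagation rule just described can be sketched as follows: values record whether they were actually used, and an expression's result is sensitive if any sensitive argument was accessed. The class names echo the Score value types, but this is an illustrative Python model, not the actual implementation.

```python
# Sketch of the sensitivity-propagation rule described above.
class Value:
    def __init__(self, content, sensitive=False):
        self._content = content
        self.sensitive = sensitive
        self.accessed = False  # set to True once the value is used

    def get(self):
        self.accessed = True
        return self._content

    def __str__(self):
        # A sensitive value is masked when printed, as SensitiveValue is.
        return "********" if self.sensitive else str(self._content)


def evaluate(expression, args):
    result = expression(args)
    # The output is sensitive if at least one sensitive argument was used.
    sensitive = any(v.sensitive and v.accessed for v in args.values())
    return Value(result, sensitive)


a = Value("secret", sensitive=True)
b = Value("public")
out = evaluate(lambda args: args["a"].get() + args["b"].get(), {"a": a, "b": b})
assert out.sensitive is True     # a was sensitive and was accessed
assert str(a) == "********"      # printing a sensitive value yields the mask
```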
As opposed to expressions, the output types of Java and Python operations are not propagated automatically to the operation’s outputs. Doing so would cause all outputs of an operation to be sensitive every time at least one input was sensitive. Instead, none of the operation’s action’s data appears in the logs, and a content author explicitly marks an operation’s outputs as sensitive when needed. This approach ensures that sensitive data is hidden at all times while still allowing full control over which operation outputs are sensitive and which are not.
Types of ExecutionSteps¶
As flows and operations are compiled, they are broken down into a number of ExecutionSteps. These steps are built using their corresponding methods in the ExecutionStepFactory.
There are five types of ExecutionSteps used to build a CloudSlang ExecutionPlan:
- Start
- End
- Begin Step
- End Step
- Action
An operation’s ExecutionPlan is built from a Start step, an Action step and an End step.
A flow’s ExecutionPlan is built from a Start step, a series of Begin Step and End Step steps, and an End step. Each step’s ExecutionSteps hand off the execution to other ExecutionPlan objects representing operations or subflows.
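The layout of an operation's plan can be sketched as a position-indexed map of execution steps, with a runner advancing a next-step position until the plan ends. This is an illustrative model only; step names follow the list above, and real navigation can jump rather than simply increment.

```python
# Sketch of an operation's ExecutionPlan as described above: a Start step,
# an Action step and an End step, indexed by position.
operation_plan = {
    0: "Start",    # bind the operation's inputs
    1: "Action",   # run the Python or Java action
    2: "End",      # bind outputs and results
}

def run(plan):
    executed = []
    position = 0
    while position in plan:
        executed.append(plan[position])
        position += 1   # real navigation may jump; here steps are sequential
    return executed

assert run(operation_plan) == ["Start", "Action", "End"]
```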

RunEnvironment¶
The RunEnvironment provides services to the ExecutionPlan as it is running. The different types of execution steps read from, write to and update the environment.
The RunEnvironment contains:
- callArguments - call arguments of the current step
- returnValues - return values for the current step
- nextStepPosition - position of the next step
- contextStack - stack of contexts of the parent scopes
- parentFlowStack - stack of the parent flows’ data
- executionPath - path of the current execution
- systemProperties - system properties
- serializableDataMap - serializable data that is common to the entire run
Engine Architecture¶
The CloudSlang Orchestration Engine (Score) is built from two main components, an engine and a worker. Scaling is achieved by adding additional workers and/or engines.

Engine¶
The engine is responsible for managing the workers and interacting with the database. It does not hold any state information itself.
The engine is composed of the following components:
- Orchestrator: Responsible for creating new executions, canceling existing executions, providing the status of existing executions and managing the split/join mechanism.
- Assigner: Responsible for assigning workers to executions.
- Queue: Responsible for storing execution information in the database and responding with messages to polling workers.
Worker¶
The worker is responsible for doing the actual work of running the execution plans. The worker holds the state of an execution as it is running.
The worker is composed of the following components:
- Worker Manager: Responsible for retrieving messages from the queue and placing them in the in-buffer, delegating messages to the execution service, draining messages from the out-buffer to the orchestrator and updating the engine as to the worker’s status.
- Execution Service: Responsible for executing the execution steps, pausing and canceling executions, splitting executions and dispatching relevant events.
Database¶
The database is composed of the following tables categorized here by their main functions:
- Execution tracking:
- RUNNING_EXECUTION_PLANS: full data of an execution plan and all of its dependencies
- EXECUTION_STATE: run statuses of an execution
- EXECUTION_QUEUE_1: metadata of execution message
- EXECUTION_STATES_1 and EXECUTION_STATES_2: full payloads of execution messages
- Splitting and joining executions:
- SUSPENDED_EXECUTIONS: executions that have been split
- FINISHED_BRANCHES: finished branches of a split execution
- Worker information:
- WORKER_NODES: info of individual workers
- WORKER_GROUPS: info of worker groups
- Recovery:
- WORKER_LOCKS: row to lock on during recovery process
- VERSION_COUNTERS: version numbers for testing responsiveness
Typical Execution Path¶
In a typical execution, the orchestrator receives an ExecutionPlan, along with all that is needed to run it, in a TriggeringProperties object through a call to the Score interface’s trigger method.
The orchestrator inserts the full ExecutionPlan with all of its dependencies into the RUNNING_EXECUTION_PLANS table. An Execution object is then created based on the TriggeringProperties, and an EXECUTION_STATE record is inserted indicating that the execution is running. The Execution object is then wrapped into an ExecutionMessage. The assigner assigns the ExecutionMessage to a worker and places the message metadata into the EXECUTION_QUEUE_1 table and its Payload into the active EXECUTION_STATES table.
The worker manager constantly polls the queue to see if there are any ExecutionMessages that have been assigned to it. As ExecutionMessages are found, the worker acknowledges that they were received, wraps them as SimpleExecutionRunnables and submits them to the execution service. When a thread is available from the execution service’s pool, the execution runs one step (control action and navigation action) at a time until there is a reason for it to stop. There are various reasons for an execution to stop running on the worker and return to the engine, including: the execution is finished, it is about to split, or it is taking too long. Once an execution is stopped, it is placed on the out-buffer, which is periodically drained back to the engine.
If the execution is finished, the engine fires a SCORE_FINISHED_EVENT and removes the execution’s information from all of the execution tables in the database.
Splitting and Joining Executions¶
Before running each step, a worker checks whether the step to be run is a split step. If it is, the worker creates a list of the split executions. It puts the execution, along with all its split executions, into a SplitMessage which is placed on the out-buffer. After draining, the orchestrator’s split-join service takes care of the executions until they are to be rejoined. The service places the parent execution into the SUSPENDED_EXECUTIONS table with a count of how many branches it has been split into. Executions are created for the split branches and placed on the queue. From there, they are picked up as usual by workers, and when they are finished they are added to the FINISHED_BRANCHES table. Periodically, a job runs to check whether the number of branches that have finished is equal to the number of branches the original execution was split into. Once all the branches are finished, the original execution is placed back onto the queue to be picked up again by a worker.
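The split/join bookkeeping above can be sketched with plain dictionaries standing in for the database tables. This is an illustrative model under assumed function names (split, finish_branch, join_job), not the engine's code.

```python
# Sketch of the split/join mechanism described above: a suspended parent
# records how many branches it was split into; a periodic job re-queues it
# once the count of finished branches matches.
suspended_executions = {}   # execution id -> expected branch count
finished_branches = {}      # execution id -> results of finished branches
queue = []

def split(execution_id, branches):
    suspended_executions[execution_id] = len(branches)
    finished_branches[execution_id] = []
    queue.extend(branches)  # branches are picked up by workers as usual

def finish_branch(execution_id, result):
    finished_branches[execution_id].append(result)

def join_job():
    # Periodic job: re-queue parents whose branches have all finished.
    for exec_id, expected in list(suspended_executions.items()):
        if len(finished_branches[exec_id]) == expected:
            del suspended_executions[exec_id]
            queue.append(("resume", exec_id))

split("run-1", ["branch-a", "branch-b"])
finish_branch("run-1", "ok")
join_job()                       # one branch left: parent stays suspended
assert "run-1" in suspended_executions
finish_branch("run-1", "ok")
join_job()                       # all branches done: parent is re-queued
assert ("resume", "run-1") in queue
```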
Recovery¶
The recovery mechanism allows Score to recover from situations that would cause a loss of data otherwise. The recovery mechanism guarantees that each step of an execution plan will be run, but does not guarantee that it will be run only once. The most common recovery situations are outlined below.
Lost Worker¶
To prevent the loss of data from a worker that is no longer responsive, the recovery mechanism does the following. Each worker continually reports its active status to the engine, which stores a reporting version number for the worker in the WORKER_NODES table. Periodically, a recovery job runs and checks which workers’ reported version numbers are outdated, indicating that they have not been reporting back. The non-responsive workers’ records in the queue get reassigned to other workers, which pick up from the last known step that was executed.
Worker Restart¶
To prevent the loss of data from a worker that has been restarted additional measures must be taken. The restarted worker will report that it is active, so the recovery job will not know to reassign the executions that were lost when it was restarted. Therefore, every time a worker has been started an internal recovery is done. The worker’s buffers are cleaned and the worker reports to the engine that it is starting up. The engine then checks the queue to see if that worker has anything that’s already on the queue. Whatever is found is passed on to a different worker while the restarted one finishes starting up before polling for new messages.
Contributions¶
GitHub Repositories¶
The CloudSlang project consists of the following repositories on GitHub with the dependencies depicted in the diagram below.
Dependency diagram

- score - CloudSlang Orchestration Engine (Score)
- dependency-management
- engine
- package
- runtime-management
- score-api
- score-samples
- score-tests
- worker
- cloud-slang - CloudSlang and the CLI
- build
- cloudslang-all
- cloudslang-cli
- cloudslang-commons
- cloudslang-compiler
- cloudslang-content-maven-compiler
- cloudslang-content-verifier
- cloudslang-entities
- cloudslang-runtime
- cloudslang-spi
- cloudslang-tests
- cloud-slang-content - CloudSlang flows and operations
- ci-env
- configuration/properties/io/cloudslang
- content/io/cloudslang
- amazon
- aws
- base
- cmd
- comparisons
- datetime
- examples
- filesystem
- http
- json
- lists
- maps
- math
- network
- os
- remote_file_transfer
- scripts
- ssh
- strings
- utils
- xml
- chef
- ci
- circleci
- consul
- coreos
- digital_ocean
- docker
- git
- haven_on_demand
- heroku
- hp_cloud
- itsm
- service_now
- jenkins
- marathon
- microsoft
- azure
- new_relic
- openshift
- openstack
- operations_orchestration
- stackato
- vmware
- vcenter
- (other integrations to be added as new folders)
- cs-actions - Java @Action classes used by CloudSlang
- cs-amazon
- cs-azure
- cs-date-time
- cs-http-client
- cs-json
- cs-lists
- cs-mail
- cs-powershell
- cs-rft
- cs-ssh
- cs-utilities
- cs-vmware
- cs-xml
- score-content-sdk - SDK for developing Java @Actions
- src/main/java/com/hp/oo/sdk/content
- annotations
- plugin
- ActionMetadata
- test-functional - Global functional tests for CLI and builder
- CloudSlang-Docker-Image - CloudSlang Docker image
- CloudSlang.github.io - CloudSlang website
- docs - CloudSlang documentation
- atom-cloudslang-package - Atom package for CloudSlang support
- cloudslang-cli - npm cloudslang-cli
Contribution Guide¶
We welcome and encourage community contributions to CloudSlang. Please familiarize yourself with the Contribution Guidelines and Project Roadmap before contributing.
There are many ways to help the CloudSlang project:
- Report issues
- Fix issues
- Improve the documentation
Contributing Code¶
The best way to directly collaborate with the project contributors is through GitHub: https://github.com/CloudSlang.
- If you want to contribute to our code by either fixing a problem or creating a new feature, please open a GitHub pull request.
- If you want to raise an issue such as a defect, an enhancement request or a general issue, please open a GitHub issue.
All patches from all contributors get reviewed.
After a pull request is made, other contributors will offer feedback. If the patch passes review, a maintainer will accept it with a comment.
When a pull request fails testing, the author is expected to update the pull request to address the failure until it passes testing and the pull request merges successfully.
At least one review from a maintainer is required for all patches (even patches from maintainers).
Content contributions that require environments that are difficult to set up may be accepted as beta content. Beta content is not verified or tested by the CloudSlang team. Beta content is named with the beta_ prefix. The community is encouraged to assist in setting up testing environments for beta content.
See the contributing.md file in the relevant repository for additional guidelines specific to that repository.
Developer’s Certificate of Origin¶
All contributions must include acceptance of the DCO:
Developer Certificate of Origin Version 1.1
Copyright (C) 2004, 2006 The Linux Foundation and its contributors. 660 York Street, Suite 102, San Francisco, CA 94110 USA
Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed.
Developer’s Certificate of Origin 1.1
By making a contribution to this project, I certify that:
- (a) The contribution was created in whole or in part by me and I have the right to submit it under the open source license indicated in the file; or
- (b) The contribution is based upon previous work that, to the best of my knowledge, is covered under an appropriate open source license and I have the right under that license to submit that work with modifications, whether created in whole or in part by me, under the same open source license (unless I am permitted to submit under a different license), as indicated in the file; or
- (c) The contribution was provided directly to me by some other person who certified (a), (b) or (c) and I have not modified it.
- (d) I understand and agree that this project and the contribution are public and that a record of the contribution (including all personal information I submit with it, including my sign-off) is maintained indefinitely and may be redistributed consistent with this project or the open source license(s) involved.
Sign your work¶
To accept the DCO, simply add this line to each commit message with your name and email address (git commit -s will do this for you):
Signed-off-by: Jane Example <jane@example.com>
For legal reasons, no anonymous or pseudonymous contributions are accepted.
Pull Requests¶
We encourage and support contributions from the community. No fix is too small. We strive to process all pull requests as soon as possible and with constructive feedback. If your pull request is not accepted at first, please try again after addressing the feedback you received.
To make a pull request you will need a GitHub account. For help, see GitHub’s documentation on forking and pull requests.
Normally, all pull requests must include tests that validate your change. Occasionally, a change will be very difficult to test. In those cases, please include a note in your commit message explaining why tests are not included.
Conduct¶
Whether you are a regular contributor or a newcomer, we care about making this community a safe place for you.
We are committed to providing a friendly, safe and welcoming environment for all regardless of their background and the extent of their contributions.
Please avoid using nicknames that might detract from a friendly, safe and welcoming environment for all. Be kind and courteous.
Those who insult, demean or harass anyone will be excluded from interaction. In particular, behavior that excludes people in socially marginalized groups will not be tolerated.
We welcome discussion about creating a welcoming, safe and productive environment for the community. If you have any questions, feedback or concerns please let us know. (info@cloudslang.io)
Tutorial¶
Contents:
Lesson 1 - Introduction and Setup¶
Goal¶
In this lesson we’ll outline our overall goals for this tutorial and set up an environment to write and run flows.
Overview¶
In this tutorial we will build a flow that represents the process a new hire must go through to get set up to work. We will build the flow one piece at a time with the goal of highlighting the features of CloudSlang. We recommend you follow along with the process, writing the flows and operations we will build on your own machine and running them using the CloudSlang CLI. To do so, you’ll need a text editor to create the CloudSlang files and the CloudSlang CLI to run them.
YAML¶
CloudSlang is a YAML-based language so it’s important to know a bit about YAML before getting started. If you’re new to YAML, you can take a look at the YAML Overview section of the CloudSlang documentation to familiarize yourself with its main structure. This tutorial will also include YAML Notes to guide you through potential trouble areas.
Copy/Pasting Code¶
Because proper indentation is so important in YAML, take care to indent pasted code examples to their proper indentation levels. The general rules for indentation can be found in the structured outlines of CloudSlang files found in the CloudSlang Files section of the DSL Reference.
If you are unsure what the indentation level is for a particular code snippet, you can take a look at where it fits into the rest of the code in the New Code - Complete section at the bottom of each lesson or by downloading the lesson’s code.
Prerequisites¶
This tutorial uses the CloudSlang CLI to run flows. See the CloudSlang CLI section of the documentation for instructions on how to download and run the CLI.
Although CloudSlang files can be composed in any text editor, using a modern code editor with support for syntax highlighting is recommended. See CloudSlang Editors for instructions on how to download, install and use the CloudSlang language package for Atom.
More Information¶
For more information on any of the topics covered in this tutorial, see the CloudSlang documentation.
Flows, Operations and Decisions¶
Let’s begin our study of the CloudSlang language by discussing the three types of CloudSlang executable constructs: flows, operations and decisions.
Generally, CloudSlang treats flows, operations and decisions similarly. Flows, operations and decisions can all receive inputs, produce outputs, return results and can be called by a flow’s step.
But flows, operations and decisions serve different purposes.
An operation contains an action, which can be written in Python or Java. Operations perform the “work” part of the workflow.
A flow contains steps, which stitch together the actions performed by operations (or subflows), navigating and passing data from one to the other based on operation results and outputs. Flows perform the “flow” part of the workflow.
A decision is very similar to an operation, but without an action.
Here is a diagram of the flow, operation and decision structure we will be building in this tutorial.

Setup¶
We’ll start writing CloudSlang code in the next lesson. But before we do that, we’ll set up our folder structure to get ready.
Create a folder named tutorials. We’ll store our flows and operations in this folder. Since we’re going to have some general content as well as content that is specific to our use case, let’s create two subfolders under tutorials called base and hiring.
We’ll start off with just one flow and one operation. In the next two lessons we’ll create a file named new_hire.sl in the hiring folder and in the base folder we’ll create a file named print.sl. The file new_hire.sl will hold our flow and print.sl will hold our first operation.
Your file structure will look like this:
- tutorials
- base
- print.sl
- hiring
- new_hire.sl
Note
If your editor requires it for syntax highlighting, you can also use the .sl.yaml and .sl.yml extensions.
Up Next¶
In the next lesson we’ll write and run our first operation.
Lesson 2 - First Operation¶
Goal¶
In this lesson we’ll write our first operation. We’ll learn the basic structure of a simple operation by writing one that simply prints out a message.
Get Started¶
First, we need to create the print.sl file in the base folder so we can start writing the print operation.
The print operation is as simple as they get. It just takes in an input and prints it out using Python.
Namespace¶
All CloudSlang files start with a namespace which mirrors the folder structure in which the files are found. In our case we’ve put print.sl in the tutorials/base folder so our namespace should reflect that.
namespace: tutorials.base
The namespace can be used by flows that call this operation.
For more information, see namespace in the DSL reference.
Operation Name¶
Each operation begins with the operation key, which maps to the contents of the operation body. The first part of that body is a key:value pair defining the name of the operation. The name of the operation must be the same as the name of the file it is stored in.
operation:
  name: print
Note
YAML Note: Indentation is very important in YAML. It is used to indicate scope. In the example above, you can see that name: print is indented under the operation key to denote that it belongs to the operation. Always use an identical number of spaces to indent. Never use tabs. For more information, see the YAML Overview.
For more information, see operation in the DSL reference.
Inputs¶
After the name, if the operation takes any inputs, they are listed under the inputs key. In our case we’ll need to take in the text we want to print. We’ll name our input text.
inputs:
  - text
Note
YAML Note: The inputs key maps to a list of inputs. In YAML, a list is signified by prepending a hyphen (-) and a space to each item.
The values for the inputs are either passed via the CloudSlang CLI, as we do below in this lesson, or from a step in a flow, as we will do in the next lesson.
Inputs can also have related parameters, such as required, default, private and sensitive. We will discuss these parameters in lesson 8 - Input Parameters.
For more information, see inputs, required, default, private and sensitive in the DSL reference.
Action¶
Finally, we’ve reached the core of the operation, the action. There are two types of actions in CloudSlang, Python-based actions and Java-based actions.
We’ll start off by creating a Python action that simply prints the text that was input. To do so, we add python_action and script keys that map to the action contents.
python_action:
  script: print text
Note
CloudSlang uses the Jython implementation of Python 2.7. For information on Jython’s limitations, see the Jython FAQ.
Python scripts that need 3rd party packages may import them using the procedures described in lesson 14 - 3rd Party Python Packages.
For more information, see python_action in the DSL reference.
The usage of Java-based actions is beyond the scope of this tutorial. For more information, see the java_action in the DSL reference.
Run It¶
That’s it. Our operation is all ready. Our next step will be to create a flow that uses the operation we just wrote, but we can actually just run the operation as is.
To do so, save the operation file, fire up the CloudSlang CLI and enter the following at the prompt to run your operation:
run --f <folder path>/tutorials/base/print.sl --i text=Hi
You should see the input text printed out to the screen.
For more information, see Use the CLI in the DSL reference.
Download the Code¶
Up Next¶
In the next lesson we’ll write a flow that will call the print operation.
New Code - Complete¶
print.sl
namespace: tutorials.base

operation:
  name: print
  inputs:
    - text
  python_action:
    script: print text
Lesson 3 - First Flow¶
Goal¶
In this lesson we’ll write a simple flow that will call the print operation. We’ll learn about the main components that make up a flow.
Get Started¶
First, we need to create a new_hire.sl file in the hiring folder so we can start writing the new hire flow. We’ll build it one step at a time. So for now, all it will do is call the print operation we wrote in the previous lesson.
Namespace¶
Just like in our operation file, we need to start the flow file with a namespace. Since we’re storing new_hire.sl in the tutorials/hiring folder the namespace must reflect that.
namespace: tutorials.hiring
For more information, see namespace in the DSL reference.
Imports¶
After the namespace, you can list the namespace of any CloudSlang files that you will need to reference in your flow. Our flow will need to reference the operation in print.sl, so we’ll add the namespace from that file, tutorials.base, to the optional imports key. We map an alias that we will use as a reference in the flow to the namespace we are importing. Let’s call the alias base.
imports:
  base: tutorials.base
Now we can use base.print to refer to the print operation in a step. We’ll do that in a moment.
For more information, see imports in the DSL reference.
For ways to refer to an operation or subflow without creating an alias, see the CloudSlang DSL Reference and the Operation Paths example.
Flow Name¶
Each flow begins with the flow key, which maps to the contents of the flow body. The first part of that body is a key:value pair defining the name of the flow. The name of the flow must be the same as the name of the file it is stored in.
flow:
  name: new_hire
For more information, see flow in the DSL reference.
Steps¶
The next part of our flow will be the workflow. The workflow key maps to a list of all the steps in the flow. We’ll start off with just one step, the one that calls our print operation. Each step in a workflow starts with a key that is its name. We’ll call our step print_start.
workflow:
  - print_start:
For more information, see workflow in the DSL reference.
A step can contain several parts, but we’ll start with a simple step with the only required part, the do key. We want to call the print operation. In this case we’ll reference it using the alias we created up in the flow’s imports section. Also, we’ll have to pass any required inputs to the operation. In our case, there’s one input named text, which we’ll add to a list under the operation call and pass it a value.
do:
  base.print:
    - text: "Starting new hire process"
navigate:
  - SUCCESS: SUCCESS
In addition to the required do, a step can also contain the optional publish and navigate keys. Here we added a navigate section. We’ll explain more about publish and navigate a little later, in lessons 5 - Default Navigation and 7 - Custom Navigation respectively.
For more information, see do, publish and navigate in the DSL reference.
Run It¶
Now our flow is all ready to run. To do so, save the file and enter the following at the prompt.
run --f <folder path>/tutorials/hiring/new_hire.sl --cp <folder path>/tutorials/base
Note
The --cp flag is used to add folders where the flow’s dependencies are found to the classpath. For more information, see Run with Dependencies in the DSL reference.
You should see the name of the step and the string sent to the print operation printed to the screen.
Download the Code¶
Up Next¶
In the next lesson we’ll write a more complex operation that also returns outputs and results.
New Code - Complete¶
new_hire.sl
namespace: tutorials.hiring

imports:
  base: tutorials.base

flow:
  name: new_hire

  workflow:
    - print_start:
        do:
          base.print:
            - text: "Starting new hire process"
        navigate:
          - SUCCESS: SUCCESS

  results:
    - SUCCESS
Lesson 4 - Outputs and Results¶
Goal¶
In this lesson we’ll write a bit of a more complex operation that returns an output and results.
Get Started¶
Let’s create a new file in the tutorials/hiring folder named check_availability.sl in which we’ll house an operation to check whether a given email address is available.
We’ll also start off our new operation in much the same way we did with the print operation. We’ll put in a namespace, the operation key, the name of the operation and an input.
namespace: tutorials.hiring

operation:
  name: check_availability
  inputs:
    - address
Action¶
This time we’ll have a slightly more complex action. The idea here is to simulate checking the availability of the given address. We’ll import and use the Python random module to get a random number between 0 and 2. If the random number we get is 0, we’ll say the requested email address is already taken.
We’ve added a commented-out line, using a Python comment (#), to print the random number that was generated. We can uncomment this line during testing to see that our operation is working as expected.
python_action:
  script: |
    import random
    rand = random.randint(0, 2)
    vacant = rand != 0
    #print rand
Note
YAML Note: Since we’re writing a multi-line Python script here, we use the pipe (|) character to denote the usage of literal style block notation, where all newlines will be preserved.
Outputs¶
In the outputs section we put any information we want to send back to the calling flow. In our case, we want to return whether the requested address was already taken. The outputs are a list of key:value pairs where the key is the name of the output and the value is the expression to be returned. In our case, we’ll just return the value in the vacant variable. Outputs must be strings, so we’ll use the Python str() function to convert the value.
outputs:
  - available: ${str(vacant)}
Notice the special ${} syntax. This indicates that what is inside the braces is a CloudSlang expression. If we had just written str(vacant), it would be understood as a string literal. We’ll see this syntax in action again in a few moments.
For more information, see Expressions in the DSL reference.
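The expression-versus-literal distinction can be illustrated with a toy resolver. This is only a sketch of the convention, not the engine’s actual expression handling:

```python
def resolve_value(raw, context):
    """Toy illustration of the ${} convention: values wrapped in ${}
    are evaluated as expressions against the context; anything else
    is passed through as a literal string."""
    if raw.startswith("${") and raw.endswith("}"):
        return str(eval(raw[2:-1], {}, context))
    return raw

ctx = {"vacant": True}
assert resolve_value("${str(vacant)}", ctx) == "True"   # evaluated
assert resolve_value("str(vacant)", ctx) == "str(vacant)"  # literal
```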
At this point we won’t be using the output value, but we will soon enough. In lesson 5 - Default Navigation we publish the available output and use it in another step.
For more information, see outputs in the DSL reference.
Results¶
The last section of our operation defines the results we return to the calling flow. The results are used by the navigation of the calling flow. We’ll start by using the default result values, SUCCESS and FAILURE. If the email address was available, we’ll return a result of SUCCESS; otherwise we’ll return a result of FAILURE. There must always be a default ending result that either has no expression or explicitly maps to the value true. Here, we will use the SUCCESS result as our catchall. When the operation is run, the first result whose expression is true or empty is returned. It is therefore important to take care in the ordering of the results.
results:
  - FAILURE: ${rand == 0}
  - SUCCESS
The results are used by the calling flow for navigation purposes. You can see the default navigation rules in action in lessons 5 - Default Navigation and 6 - Handling Failure Results. And you can learn how to create custom navigation in lesson 7 - Custom Navigation.
For more information, see results in the DSL reference.
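The "first result whose expression is true or empty wins" rule can be modeled in a few lines of Python. This is a toy model of the selection rule only, not the engine’s code:

```python
def evaluate_results(results, context):
    """Return the first result whose expression evaluates to True,
    or whose expression is omitted (an unconditional catchall)."""
    for name, expression in results:
        if expression is None or expression(context):
            return name
    raise ValueError("no matching result")

results = [
    ("FAILURE", lambda ctx: ctx["rand"] == 0),  # checked first
    ("SUCCESS", None),                          # catchall, no expression
]

assert evaluate_results(results, {"rand": 0}) == "FAILURE"
assert evaluate_results(results, {"rand": 2}) == "SUCCESS"
```

Reversing the order would make SUCCESS match first for every run, which is why the catchall is listed last.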
Run It¶
Let’s save and run this operation by itself before we start using it in our flow to make sure everything is working properly. (You might want to uncomment the line that prints out the random number while testing.) To run the operation, enter the following in the CLI:
run --f <folder path>/tutorials/hiring/check_availability.sl --i address=john.doe@somecompany.com
Run the operation a few times and make sure that both the SUCCESS and FAILURE cases are working as expected.
Download the Code¶
Up Next¶
In the next lesson we’ll integrate our new operation into our flow, using the output and results it sends.
New Code - Complete¶
check_availability.sl
namespace: tutorials.hiring

operation:
  name: check_availability

  inputs:
    - address

  python_action:
    script: |
      import random
      rand = random.randint(0, 2)
      vacant = rand != 0
      # print rand

  outputs:
    - available: ${str(vacant)}

  results:
    - FAILURE: ${rand == 0}
    - SUCCESS
Lesson 6 - Handling Failure Results¶
Goal¶
In this lesson we’ll learn one strategy for handling results of FAILURE using the default navigation.
Get Started¶
Let’s continue where we left off in the new_hire.sl flow and add some code to deal with the case when the check_availability operation returns a result of FAILURE.
Failure Handling¶
There is special syntax that can be used for handling FAILURE results by default. We wrap a step inside the on_failure key. Let’s add this functionality after the print_finish step.
- on_failure:
  - print_fail:
      do:
        base.print:
          - text: "${'Failed to create address: ' + address}"
Now, when any step receives a result of FAILURE from its operation, the flow will navigate to the on_failure step by default.
For more information, see on_failure in the DSL reference.
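The default behavior (SUCCESS moves on to the next step, FAILURE jumps to on_failure and ends the flow with FAILURE) can be sketched as a toy model. This is an illustration of the rule, not the engine’s implementation:

```python
def default_navigation(steps, on_failure_step):
    """Toy model of default navigation: run steps in order; a FAILURE
    result runs the on_failure step and ends the flow with FAILURE,
    any other result moves on to the next step."""
    for name, step in steps:
        if step() == "FAILURE":
            on_failure_step()
            return "FAILURE"
    return "SUCCESS"

log = []

def make_step(name, result):
    def step():
        log.append(name)   # record execution order
        return result
    return step

flow = [("print_start", make_step("print_start", "SUCCESS")),
        ("check_address", make_step("check_address", "FAILURE"))]

assert default_navigation(flow, make_step("print_fail", "FAILURE")) == "FAILURE"
assert log == ["print_start", "check_address", "print_fail"]
```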
Run It¶
We can save and run the flow using the exact command we used in the last lesson. This time, however, things should work slightly differently.
run --f <folder path>/tutorials/hiring/new_hire.sl --cp <folder path>/tutorials --i address=john.doe@somecompany.com
In the case of the check_availability operation returning a result of SUCCESS, we expect the flow to behave exactly as it did before. Notice that this means it will know not to run the on_failure step without us adding any navigation instructions. This is part of the default navigation behavior.

In the case of the check_availability operation returning a result of FAILURE, the flow will no longer terminate immediately with a result of FAILURE. Instead, the flow will continue by running the on_failure step, which in our case prints out an error message.
Download the Code¶
Up Next¶
In the next lesson we’ll see how to achieve a similar outcome using custom navigation.
New Code - Complete¶
new_hire.sl
namespace: tutorials.hiring

imports:
  base: tutorials.base

flow:
  name: new_hire

  inputs:
    - address

  workflow:
    - print_start:
        do:
          base.print:
            - text: "Starting new hire process"
        navigate:
          - SUCCESS: check_address

    - check_address:
        do:
          check_availability:
            - address
        publish:
          - availability: ${available}

    - print_finish:
        do:
          base.print:
            - text: "${'Availability for address ' + address + ' is: ' + availability}"
        navigate:
          - SUCCESS: SUCCESS

    - on_failure:
        - print_fail:
            do:
              base.print:
                - text: "${'Failed to create address: ' + address}"
Lesson 8 - Input and Output Parameters¶
Goal¶
In this lesson we’ll learn how to change the way inputs and outputs behave using input and output properties.
Get Started¶
It’s time to create a new operation. Create a new file in the tutorials/hiring folder called generate_user_email.sl. In here we’ll create an operation that takes in some user information and produces an email address for that user. We’ll write it so that it takes in which attempt this is at creating an email address for this user. That way we can use it in conjunction with our check_availability operation. Eventually, we’ll generate an address, check its availability, and if it’s unavailable we’ll do it all over again. The following code does not present any new concepts. We will use it as a starting point for a discussion on input properties.
namespace: tutorials.hiring

operation:
  name: generate_user_email

  inputs:
    - first_name
    - middle_name
    - last_name
    - domain
    - attempt

  python_action:
    script: |
      attempt = int(attempt)
      if attempt == 1:
        address = first_name[0:1] + '.' + last_name + '@' + domain
      elif attempt == 2:
        address = first_name + '.' + last_name[0:1] + '@' + domain
      elif attempt == 3 and middle_name != '':
        address = first_name + '.' + middle_name[0:1] + '.' + last_name + '@' + domain
      else:
        address = ''
      # print address

  outputs:
    - email_address: ${address}

  results:
    - FAILURE: ${address == ''}
    - SUCCESS
Test¶
You can save the file and test that the operation is working as expected by using the following command:
run --f <folder path>/tutorials/hiring/generate_user_email.sl --i first_name=john,middle_name=e,last_name=doe,domain=somecompany,attempt=1
It may help to uncomment the print line to see what is being output. Change the value for attempt in the run command and see what happens.
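To see what each attempt value produces without running the CLI, here is the operation’s script transcribed as a plain Python function:

```python
def generate_address(first_name, middle_name, last_name, domain, attempt):
    """Transcription of the generate_user_email script, for trying out
    the attempt values outside the CLI."""
    if attempt == 1:
        return first_name[0:1] + '.' + last_name + '@' + domain
    elif attempt == 2:
        return first_name + '.' + last_name[0:1] + '@' + domain
    elif attempt == 3 and middle_name != '':
        return first_name + '.' + middle_name[0:1] + '.' + last_name + '@' + domain
    return ''  # no more formats to try: the FAILURE result will fire

assert generate_address('john', 'e', 'doe', 'somecompany', 1) == 'j.doe@somecompany'
assert generate_address('john', 'e', 'doe', 'somecompany', 2) == 'john.d@somecompany'
assert generate_address('john', 'e', 'doe', 'somecompany', 3) == 'john.e.doe@somecompany'
assert generate_address('john', '', 'doe', 'somecompany', 3) == ''
```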
Add to Flow¶
Let’s add a step in the new_hire flow to call our new operation. That will allow us to demonstrate how input properties affect the way variables are passed to operations.
Between the print_start step and the check_address step we’ll put our new step named generate_address.
- generate_address:
    do:
      generate_user_email:
        - first_name
        - middle_name
        - last_name
        - domain
        - attempt
    publish:
      - address: ${email_address}
We’ll also have to change the inputs of the flow to reflect our new addition. We can remove address from the flow inputs since we’ll now be getting the address from the generate_user_email operation and publishing it in the generate_address step. Instead, we need to add the inputs necessary for the generate_user_email operation to the flow’s inputs section.
inputs:
  - first_name
  - middle_name
  - last_name
  - domain
  - attempt
We also have to fix the navigation of the print_start step.
- print_start:
    do:
      base.print:
        - text: "Starting new hire process"
    navigate:
      - SUCCESS: generate_address
One last thing to tidy up is the failure message, which can no longer include an address, since none was created.
- on_failure:
  - print_fail:
      do:
        base.print:
          - text: "Failed to create address"
At this point everything is set up to go. We can save the file and run the flow as long as we pass all the necessary arguments.
run --f <folder path>/tutorials/hiring/new_hire.sl --cp <folder path>/tutorials --i first_name=john,middle_name=e,last_name=doe,domain=somecompany.com,attempt=1
Required¶
By default, all flow and operation inputs are required. We can change that behavior by setting the required property of an input to false. Let’s make middle_name optional. We’ll have to set its required property to false in both the flow’s inputs and the generate_user_email operation’s inputs.
flow:
  name: new_hire
  inputs:
    - first_name
    - middle_name:
        required: false
    - last_name
    - domain
    - attempt
operation:
  name: generate_user_email
  inputs:
    - first_name
    - middle_name:
        required: false
    - last_name
    - domain
    - attempt
Note
YAML Note: Don’t forget to add a colon (:) to the input name before adding its properties.
For more information, see required in the DSL reference.
Default¶
We can also make an input optional by providing a default value. If no value is passed for an input that declares the default property, the default value is used instead. In our case, we can set the generate_user_email operation’s middle_name to default to the empty string.
operation:
  name: generate_user_email
  inputs:
    - first_name
    - middle_name:
        required: false
        default: ""
    - last_name
    - domain
    - attempt
Now the flow can be run after saving the files without providing a value for the middle name.
run --f <folder path>/tutorials/hiring/new_hire.sl --cp <folder path>/tutorials --i first_name=john,last_name=doe,domain=somecompany.com,attempt=1
For more information, see default in the DSL reference.
Private¶
The default value is only used if another value is not passed to the operation. But sometimes we want to force the default value to be the one used, even if a different value is passed from a flow. Let’s do that to the domain input of the generate_user_email operation. To do so, we set the input’s private property to true. We’ll also have to set a default value for the input.
operation:
  name: generate_user_email
  inputs:
    - first_name
    - middle_name:
        required: false
        default: ""
    - last_name
    - domain:
        default: "acompany.com"
        private: true
    - attempt
We can save the file and then run the flow using the same command as above. You’ll notice that no matter what is passed to the domain input, acompany.com is what ends up in the email address. That’s exactly what we want, but obviously there is no reason to pass values to the domain variable anymore. So let’s just remove it from the flow inputs and the generate_address step.
flow:
  name: new_hire
  inputs:
    - first_name
    - middle_name:
        required: false
    - last_name
    - attempt
- generate_address:
    do:
      generate_user_email:
        - first_name
        - middle_name
        - last_name
        - attempt
    publish:
      - address: ${email_address}
For more information, see private in the DSL reference.
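The interplay of required, default and private can be summarized with a toy resolution function. This is only a sketch of the rules described above, not the engine’s actual input handling:

```python
def resolve_input(name, passed, default=None, required=True, private=False):
    """Toy model of input resolution: private inputs ignore any passed
    value; otherwise a passed value wins, then the default; a required
    input that still has no value is an error."""
    value = None if private else passed.get(name)
    if value is None:
        value = default
    if value is None and required:
        raise ValueError("missing required input: " + name)
    return value

args = {"domain": "somecompany.com"}
# private forces the default even though a value was passed:
assert resolve_input("domain", args, default="acompany.com", private=True) == "acompany.com"
# an optional input with no value and no default resolves to nothing:
assert resolve_input("middle_name", args, required=False) is None
```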
Sensitive¶
Finally, we can mark both inputs and outputs as sensitive. When a variable is marked as sensitive, its value will not be printed in logs, events or in the outputs of the CLI and Build Tool.

In the check_availability operation, let’s create a temporary password if an email address is available. We’ll just add a few lines to our script to randomly generate a short password if the address is available. In the outputs section, we’ll mark that password as sensitive. Notice that when we add a sensitive property to an output, we have to add a value property as well.
python_action:
  script: |
    import random
    import string
    rand = random.randint(0, 2)
    vacant = rand != 0
    # print vacant
    if vacant:
      password = ''.join(random.choice(string.ascii_uppercase + string.digits) for _ in range(6))
    else:
      password = ''
outputs:
  - available: ${str(vacant)}
  - password:
      value: ${password}
      sensitive: true
You can now run the check_availability operation and see that the password output is not printed to the screen.
run --f <folder path>/tutorials/hiring/check_availability.sl --i address=john.doe@somecompany.com
In the new_hire flow we’ll add password to the publish section of the check_address step to be used later on.
- check_address:
    do:
      check_availability:
        - address
    publish:
      - availability: ${available}
      - password
    navigate:
      - UNAVAILABLE: print_fail
      - AVAILABLE: print_finish
For more information, see sensitive in the DSL reference.
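What "sensitive" buys you can be pictured with a toy masking function: the value still flows through the system, it just gets redacted wherever it would be displayed. A sketch of the idea, not the actual CLI code:

```python
def render_outputs(outputs, sensitive_names):
    """Toy sketch: outputs marked sensitive are masked when rendered,
    while non-sensitive outputs are shown as-is."""
    return {name: ("********" if name in sensitive_names else value)
            for name, value in outputs.items()}

shown = render_outputs({"available": "True", "password": "X7K2P9"},
                       sensitive_names={"password"})
assert shown == {"available": "True", "password": "********"}
```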
Run It¶
Now we can save the file and run the flow without passing the domain. We can also either include the middle name or leave it out.
run --f <folder path>/tutorials/hiring/new_hire.sl --cp <folder path>/tutorials --i first_name=john,last_name=doe,attempt=1
Download the Code¶
Up Next¶
In the next lesson we’ll see how to use subflows.
New Code - Complete¶
new_hire.sl
namespace: tutorials.hiring

imports:
  base: tutorials.base

flow:
  name: new_hire

  inputs:
    - first_name
    - middle_name:
        required: false
    - last_name
    - attempt

  workflow:
    - print_start:
        do:
          base.print:
            - text: "Starting new hire process"
        navigate:
          - SUCCESS: generate_address

    - generate_address:
        do:
          generate_user_email:
            - first_name
            - middle_name
            - last_name
            - attempt
        publish:
          - address: ${email_address}

    - check_address:
        do:
          check_availability:
            - address
        publish:
          - availability: ${available}
          - password
        navigate:
          - UNAVAILABLE: print_fail
          - AVAILABLE: print_finish

    - print_finish:
        do:
          base.print:
            - text: "${'Availability for address ' + address + ' is: ' + availability}"
        navigate:
          - SUCCESS: SUCCESS

    - on_failure:
        - print_fail:
            do:
              base.print:
                - text: "Failed to create address"
generate_user_email.sl
namespace: tutorials.hiring

operation:
  name: generate_user_email

  inputs:
    - first_name
    - middle_name:
        required: false
        default: ""
    - last_name
    - domain:
        default: "acompany.com"
        private: true
    - attempt

  python_action:
    script: |
      attempt = int(attempt)
      if attempt == 1:
        address = first_name[0:1] + '.' + last_name + '@' + domain
      elif attempt == 2:
        address = first_name + '.' + last_name[0:1] + '@' + domain
      elif attempt == 3 and middle_name != '':
        address = first_name + '.' + middle_name[0:1] + '.' + last_name + '@' + domain
      else:
        address = ''
      # print address

  outputs:
    - email_address: ${address}

  results:
    - FAILURE: ${address == ''}
    - SUCCESS
check_availability.sl
namespace: tutorials.hiring

operation:
  name: check_availability

  inputs:
    - address

  python_action:
    script: |
      import random
      import string
      rand = random.randint(0, 2)
      vacant = rand != 0
      # print vacant
      if vacant:
        password = ''.join(random.choice(string.ascii_uppercase + string.digits) for _ in range(6))
      else:
        password = ''

  outputs:
    - available: ${str(vacant)}
    - password:
        value: ${password}
        sensitive: true

  results:
    - UNAVAILABLE: ${rand == 0}
    - AVAILABLE
Lesson 9 - Subflows¶
Goal¶
In this lesson we’ll learn how to use subflows.
Get Started¶
We’ll start by creating a new file in the tutorials/hiring folder called create_user_email.sl to hold our subflow. A subflow is a flow itself and therefore it follows all the regular flow syntax.
Move Code¶
The first thing we’ll do is steal a bunch of the code that currently sits in new_hire.sl. Let’s take everything up until the workflow key, copy it into the new flow and make a couple of changes. First, we won’t need the imports, so we can just delete them. Next, we’ll change the name of the flow to create_user_email. That should do it for this section.
namespace: tutorials.hiring

flow:
  name: create_user_email
  inputs:
    - first_name
    - middle_name:
        required: false
    - last_name
    - attempt
Next let’s create a workflow section and copy the generate_address and check_address steps into it.
workflow:
  - generate_address:
      do:
        generate_user_email:
          - first_name
          - middle_name
          - last_name
          - attempt
      publish:
        - address: ${email_address}

  - check_address:
      do:
        check_availability:
          - address
      publish:
        - availability: ${available}
        - password
      navigate:
        - UNAVAILABLE: print_fail
        - AVAILABLE: print_finish
Fix Up Subflow¶
Now we have to reroute our navigation, add flow outputs and flow results.
Let’s start with adding the flow results. We’ll have our flow return one of three result options.
CREATED - everything went smoothly and a new, available address was created
UNAVAILABLE - an address was generated, but it wasn’t available
FAILURE - an address was not even generated
results:
  - CREATED
  - UNAVAILABLE
  - FAILURE
Now we can reroute the steps’ navigation to point to the flow results we just defined.
For the generate_address step, whose operation returns SUCCESS or FAILURE, we can route SUCCESS to the next step and FAILURE to the FAILURE result of the flow.
- generate_address:
    do:
      generate_user_email:
        - first_name
        - middle_name
        - last_name
        - attempt
    publish:
      - address: ${email_address}
    navigate:
      - SUCCESS: check_address
      - FAILURE: FAILURE
For the check_address step, whose operation returns UNAVAILABLE or AVAILABLE, we can route UNAVAILABLE to the UNAVAILABLE result of the flow and AVAILABLE to the CREATED result of the flow.
- check_address:
    do:
      check_availability:
        - address
    publish:
      - availability: ${available}
      - password
    navigate:
      - UNAVAILABLE: UNAVAILABLE
      - AVAILABLE: CREATED
Finally, we can pass along the outputs published in the steps as flow outputs.
outputs:
  - address
  - password
  - availability
Test It¶
At this point the subflow is ready and we can test it by running it as we would any other flow. Save the file and run it a few times while playing with the attempt input to make sure all three possible results are returned at some point.
run --f <folder path>/tutorials/hiring/create_user_email.sl --cp <folder path>/tutorials --i first_name=john,last_name=doe,attempt=1
Fix Up Parent Flow¶
Finally, let’s make changes to our original flow so that it makes use of the subflow we just created.
First let’s replace the two steps we took out with one new step that calls the subflow instead of an operation. You may have noticed that both flows and operations take inputs, return outputs and return results. That allows us to use them almost interchangeably. We’ve run both flows and operations using the CLI. Now we see that we can call them both from steps as well.
Delete the generate_address and check_address steps. We’ll now replace them with a new step called create_email_address. It will pass along the flow inputs, publish the necessary outputs and wire up the appropriate navigation.
- create_email_address:
    do:
      create_user_email:
        - first_name
        - middle_name
        - last_name
        - attempt
    publish:
      - address
      - password
    navigate:
      - CREATED: print_finish
      - UNAVAILABLE: print_fail
      - FAILURE: print_fail
All that’s left now is to change the text of the messages sent in the print_finish and print_fail steps to better reflect what is happening.
- print_finish:
    do:
      base.print:
        - text: "${'Created address: ' + address + ' for: ' + first_name + ' ' + last_name}"
    navigate:
      - SUCCESS: SUCCESS

- on_failure:
  - print_fail:
      do:
        base.print:
          - text: "${'Failed to create address for: ' + first_name + ' ' + last_name}"
Run It¶
Now we can save the files and run the parent flow, which will also run the subflow. Once again, you should run it a few times and play with the attempt input to make sure all the possible outcomes occur at some point.
run --f <folder path>/tutorials/hiring/new_hire.sl --cp <folder path>/tutorials --i first_name=john,last_name=doe,attempt=1
Download the Code¶
Up Next¶
In the next lesson we’ll change our new step to include a loop which will retry the email creation several times if necessary.
New Code - Complete¶
new_hire.sl
namespace: tutorials.hiring

imports:
  base: tutorials.base

flow:
  name: new_hire

  inputs:
    - first_name
    - middle_name:
        required: false
    - last_name
    - attempt

  workflow:
    - print_start:
        do:
          base.print:
            - text: "Starting new hire process"
        navigate:
          - SUCCESS: create_email_address

    - create_email_address:
        do:
          create_user_email:
            - first_name
            - middle_name
            - last_name
            - attempt
        publish:
          - address
          - password
        navigate:
          - CREATED: print_finish
          - UNAVAILABLE: print_fail
          - FAILURE: print_fail

    - print_finish:
        do:
          base.print:
            - text: "${'Created address: ' + address + ' for: ' + first_name + ' ' + last_name}"
        navigate:
          - SUCCESS: SUCCESS

    - on_failure:
        - print_fail:
            do:
              base.print:
                - text: "${'Failed to create address for: ' + first_name + ' ' + last_name}"
create_user_email.sl
namespace: tutorials.hiring

flow:
  name: create_user_email

  inputs:
    - first_name
    - middle_name:
        required: false
    - last_name
    - attempt

  workflow:
    - generate_address:
        do:
          generate_user_email:
            - first_name
            - middle_name
            - last_name
            - attempt
        publish:
          - address: ${email_address}
        navigate:
          - SUCCESS: check_address
          - FAILURE: FAILURE

    - check_address:
        do:
          check_availability:
            - address
        publish:
          - availability: ${available}
          - password
        navigate:
          - UNAVAILABLE: UNAVAILABLE
          - AVAILABLE: CREATED

  outputs:
    - address
    - password
    - availability

  results:
    - CREATED
    - UNAVAILABLE
    - FAILURE
Lesson 10 - For Loop¶
Goal¶
In this lesson we’ll learn how to use a for loop to create an iterative step.
Get Started¶
The idea here is to continually try the create_user_email subflow until it either creates an available address or fails. To do so, we should be able to leave the subflow as is and just work on the create_email_address step in new_hire.sl.
Loop Syntax¶
An iterative step looks very similar to a standard step that only runs once. To transform our create_email_address step into one that loops, we’ll add the loop key along with a loop expression and indent the do and publish sections. For now, we’ll loop over a list of numbers.
- create_email_address:
    loop:
      for: attempt in [1,2,3,4]
      do:
        create_user_email:
          - first_name
          - middle_name
          - last_name
          - attempt: ${str(attempt)}
      publish:
        - address
        - password
    navigate:
      - CREATED: print_finish
      - UNAVAILABLE: print_fail
      - FAILURE: print_fail
Note
YAML Note: A list can be written using bracket ([]) notation instead of using indentation and hyphens (-).
For each item in our list, the attempt loop variable is assigned the value and then passed to an iteration of the subflow call. All inputs must be strings, so we convert the attempt value to a string using the Python str() function.

Since we’re assigning a value to attempt in the loop and not using it as a flow input, we can delete it from the flow’s input list.
For more information, see loop, for and publish in the DSL reference.
Default Behavior¶
We can save the file and run the flow now and see how the loop works. It won’t quite do what we want yet, but it will demonstrate what a loop’s default behavior is. Play around a bit with passing the optional middle name and not passing it to see what happens. Also try removing the last item in the loop expression’s list.
run --f <folder path>/tutorials/hiring/new_hire.sl --cp <folder path>/tutorials --i first_name=john,middle_name=e,last_name=doe
The first thing you’ll notice is that the subflow is run several times. This is what our loop has done. Next, you’ll notice that depending on whether you’ve passed a middle name and how big the loop list is, different things will happen. This is due to the default behavior of loops and our create_user_email subflow.

By default, a loop exits when either the list it is looping on has been exhausted or the operation or subflow called returns a result of FAILURE. This explains the following cases:
case | middle_name | list      | iterations | flow result
-----|-------------|-----------|------------|-------------------
1    | no          | [1,2,3,4] | 3          | FAILURE
2    | yes         | [1,2,3,4] | 4          | FAILURE
3    | no          | [1,2,3]   | 3          | FAILURE
4    | yes         | [1,2,3]   | 3          | FAILURE or SUCCESS
For all cases: for attempt values of 1 and 2, the create_user_email subflow will return a result of either CREATED or UNAVAILABLE, because the generate_user_email operation will return a result of SUCCESS. Since neither of those is FAILURE, the loop will continue to run.
Case 1: Since middle_name is not present, the generate_user_email operation will return a result of FAILURE when 3 is passed to its attempt input. The loop exits on the FAILURE result by default and goes to its navigate section, which forwards it to print_fail. Since print_fail is the on_failure step, it ends the flow with a result of FAILURE.
Case 2: This case is very similar to the previous one. The only difference is that the generate_user_email operation will return a result of FAILURE when 4, not 3, is passed to its attempt input.
Case 3: This case is even more similar to the first case. The first case never got to the 4th iteration of the loop, so we can expect that if we remove the 4th item from the list, the same thing will happen.
Case 4: This time we have a middle_name, so the create_user_email subflow will run successfully all three times, returning results of either CREATED or UNAVAILABLE. Since neither of those is FAILURE, the loop will only exit when the list is exhausted. At that point the result from the last iteration of the step will be used by the navigation to see where the flow goes next. If the last iteration’s result is CREATED, the print_finish step will run and the flow will end with a result of SUCCESS. If the last iteration’s result is UNAVAILABLE, the print_fail step will run and the flow will end with a result of FAILURE.
Custom Break¶
Now that we understand what happens in the default case, let’s put in a custom break so the loop will do what we want it to. We want the loop to stop when we’ve either found a suitable email address or something has gone wrong, so we’ll add a break key with a list of results we want to break on, which in our case is CREATED or FAILURE.
- create_email_address:
    loop:
      for: attempt in [1,2,3,4]
      do:
        create_user_email:
          - first_name
          - middle_name
          - last_name
          - attempt: ${str(attempt)}
      publish:
        - address
        - password
      break:
        - CREATED
        - FAILURE
    navigate:
      - CREATED: print_finish
      - UNAVAILABLE: print_fail
      - FAILURE: print_fail
In a case where we want the loop to continue no matter what happens, we would have to override the default break on a result of FAILURE by mapping the break key to an empty list ([]).
The published address variable will contain the address value from the last iteration of the loop. We can use it the same way published variables are used in regular steps. However, when using loops, you often want to aggregate the published output. We will do that in the next lesson.
For more information, see break in the DSL reference.
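The default and custom break behaviors can be captured in a small toy model. This is only a sketch of the rule, not the engine’s loop implementation:

```python
def run_loop(values, step, break_on):
    """Toy model of a CloudSlang loop: run `step` for each value and
    stop early as soon as the step's result is in `break_on`. The
    last result obtained feeds the step's navigation."""
    result = None
    for value in values:
        result = step(value)
        if result in break_on:
            break
    return result

def create_user_email(attempt):
    # deterministic stand-in for the subflow: only attempt 2 succeeds
    return "CREATED" if attempt == 2 else "UNAVAILABLE"

# default behavior: the loop breaks only on FAILURE, so every item runs
# and the last iteration's result is what navigation sees
assert run_loop([1, 2, 3], create_user_email, break_on=["FAILURE"]) == "UNAVAILABLE"

# custom break list: the loop stops as soon as CREATED is returned
assert run_loop([1, 2, 3], create_user_email, break_on=["CREATED", "FAILURE"]) == "CREATED"
```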
List Types¶
One last thing we can change to improve our flow is the loop’s list. Right now we’re using a literal list, but we can use any Python expression that results in a list instead. So here we can substitute [1,2,3,4] with range(1,5). We could also use a comma-delimited string, which would be split automatically into a list.
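A quick check of both list sources in plain Python (run here under Python 3, where range() must be wrapped in list() to compare; in the Python 2 style this tutorial’s scripts use, range() returns a list directly):

```python
# range(1, 5) yields 1, 2, 3, 4 - the end value is exclusive
assert list(range(1, 5)) == [1, 2, 3, 4]

# a comma-delimited string is another list source; splitting it
# yields the individual items (as strings)
assert "1,2,3,4".split(",") == ["1", "2", "3", "4"]
```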
Run It¶
Everything should be working as expected now. We can save our file and run the flow with or without a middle name. To test a result of FAILURE, it’s best not to pass a middle name and run the flow several times.
run --f <folder path>/tutorials/hiring/new_hire.sl --cp <folder path>/tutorials --i first_name=john,last_name=doe
Download the Code¶
Up Next¶
In the next lesson we’ll write another loop and aggregate the information that is output.
New Code - Complete¶
new_hire.sl
namespace: tutorials.hiring

imports:
  base: tutorials.base

flow:
  name: new_hire

  inputs:
    - first_name
    - middle_name:
        required: false
    - last_name

  workflow:
    - print_start:
        do:
          base.print:
            - text: "Starting new hire process"
        navigate:
          - SUCCESS: create_email_address

    - create_email_address:
        loop:
          for: attempt in range(1,5)
          do:
            create_user_email:
              - first_name
              - middle_name
              - last_name
              - attempt: ${str(attempt)}
          publish:
            - address
            - password
          break:
            - CREATED
            - FAILURE
        navigate:
          - CREATED: print_finish
          - UNAVAILABLE: print_fail
          - FAILURE: print_fail

    - print_finish:
        do:
          base.print:
            - text: "${'Created address: ' + address + ' for: ' + first_name + ' ' + last_name}"
        navigate:
          - SUCCESS: SUCCESS

    - on_failure:
        - print_fail:
            do:
              base.print:
                - text: "${'Failed to create address for: ' + first_name + ' ' + last_name}"
Lesson 11 - Loop Aggregation¶
Goal¶
In this lesson we’ll learn how to aggregate output from a loop.
Get Started¶
We’ll create a new step to simulate ordering equipment. Internally it will randomly decide whether a piece of equipment is available or not. Then we’ll run that step in a loop from the main flow and record the cost of the ordered equipment and which items were unavailable. Create a new file named order.sl in the tutorials/hiring folder to house the new operation we’ll write and get the new_hire.sl file ready because we’ll need to add a step to the main flow.
Operation¶
The order operation, as we’ll call it, looks very similar to our check_availability operation. It uses a random number to simulate whether a given item is available. If the item is available, it will return the amount spent as one output, and the not_ordered output will be empty. If the item is unavailable, it will return 0 for the spent output and the name of the item in the not_ordered output.
namespace: tutorials.hiring

operation:
  name: order

  inputs:
    - item
    - price

  python_action:
    script: |
      print 'Ordering: ' + item
      import random
      rand = random.randint(0, 2)
      available = rand != 0
      not_ordered = item + ';' if rand == 0 else ''
      spent = 0 if rand == 0 else price
      if rand == 0: print 'Unavailable'

  outputs:
    - not_ordered
    - spent: ${spent}

  results:
    - UNAVAILABLE: ${rand == 0}
    - AVAILABLE
Step¶
Now let’s go back to our flow and create a step, between create_email_address and print_finish, to call our operation in a loop. This time we’ll loop through a map of items and their prices, named order_map, that we’ll define at the flow level in a few moments. We use the Python eval() function to turn a string into a Python dictionary that we can loop over.
- get_equipment:
    loop:
      for: item, price in eval(order_map)
      do:
        order:
          - item
          - price: ${str(price)}
          - missing: ${all_missing}
          - cost: ${total_cost}
      break: []
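To see what eval() does to the map string here, try it in plain Python (the loop expression’s "item, price" unpacking then corresponds to iterating the resulting dictionary’s key/value pairs):

```python
order_map = '{"laptop": 1000, "docking station": 200, "monitor": 500, "phone": 100}'

prices = eval(order_map)            # the string becomes a Python dict
assert prices["laptop"] == 1000
assert sum(prices.values()) == 1800

# iterating the dict's items yields (item, price) pairs
assert ("monitor", 500) in prices.items()
```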
Notice the missing and cost variables. These are not inputs of the order operation; that operation only takes the item and price inputs. We will be using missing and cost together with some flow-level variables to perform the loop aggregation.
Also notice how we’ve added a break key which maps to an empty list of break results. This is necessary because the order operation does not contain a result of FAILURE, which is the default for breaking out of a loop.
Now let’s create those flow-level variables in the flow’s inputs section. Each time through the loop we want to aggregate the data that the order operation outputs. We’ll create two variables, all_missing and total_cost, for this purpose, defining them as private and giving them default values to start with. Also, we’ll declare another variable called order_map that will contain the map we’re looping on.
inputs:
  - first_name
  - middle_name:
      required: false
  - last_name
  - all_missing:
      default: ""
      required: false
      private: true
  - total_cost:
      default: '0'
      private: true
  - order_map:
      default: '{"laptop": 1000, "docking station": 200, "monitor": 500, "phone": 100}'
Now we can perform the aggregation. In the get_equipment step’s publish section, we’ll add the values output from the order operation (not_ordered and spent) to the step arguments we just created in the get_equipment step (missing and cost) and publish them back to the flow-level variables (all_missing and total_cost). This runs after the operation has completed in each iteration, aggregating all the data. For example, each time through the loop the cost step argument is set from the current total_cost. Then the order operation runs and a spent value is output. That spent value is added to the step’s cost variable and published back into the flow-level total_cost on each iteration of the get_equipment step.
publish:
  - all_missing: ${missing + not_ordered}
  - total_cost: ${str(int(cost) + int(spent))}
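The accumulation the publish section performs across iterations can be traced in plain Python. This is a toy model of the aggregation only; the availability flags are supplied directly instead of being drawn at random:

```python
def aggregate_orders(orders):
    """Toy model of the get_equipment loop's aggregation: each iteration
    appends unavailable items to all_missing and adds the item's price
    to total_cost, mirroring the two publish expressions."""
    all_missing, total_cost = "", 0
    for item, price, available in orders:
        spent = price if available else 0
        not_ordered = "" if available else item + ";"
        all_missing += not_ordered         # ${missing + not_ordered}
        total_cost += spent                # ${str(int(cost) + int(spent))}
    return all_missing, str(total_cost)

missing, cost = aggregate_orders(
    [("laptop", 1000, True), ("monitor", 500, False), ("phone", 100, True)]
)
assert missing == "monitor;"
assert cost == "1100"
```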
Finally we have to rewire all the navigation logic to take into account our new step.
We need to change the create_email_address step to forward successful email address creations to get_equipment.
navigate:
  - CREATED: get_equipment
  - UNAVAILABLE: print_fail
  - FAILURE: print_fail
And we need to add navigation to the get_equipment step. We’ll always go to print_finish no matter what happens.
navigate:
  - AVAILABLE: print_finish
  - UNAVAILABLE: print_finish
Finish¶
The last thing left to do is print out a finish message that also reflects the status of the equipment order.
- print_finish:
    do:
      base.print:
        - text: >
            ${'Created address: ' + address + ' for: ' + first_name + ' ' + last_name + '\n' +
            'Missing items: ' + all_missing + ' Cost of ordered items: ' + total_cost}
    navigate:
      - SUCCESS: SUCCESS
Run It¶
We can save the files, run the flow and see that the ordering takes place, the proper information is aggregated and then it is printed.
run --f <folder path>/tutorials/hiring/new_hire.sl --cp <folder path>/tutorials --i first_name=john,middle_name=e,last_name=doe
Download the Code¶
Up Next¶
In the next lesson we’ll see how to write a decision.
New Code - Complete¶
new_hire.sl
namespace: tutorials.hiring
imports:
base: tutorials.base
flow:
name: new_hire
inputs:
- first_name
- middle_name:
required: false
- last_name
- all_missing:
default: ""
required: false
private: true
- total_cost:
default: '0'
private: true
- order_map:
default: '{"laptop": 1000, "docking station":200, "monitor": 500, "phone": 100}'
workflow:
- print_start:
do:
base.print:
- text: "Starting new hire process"
navigate:
- SUCCESS: create_email_address
- create_email_address:
loop:
for: attempt in range(1,5)
do:
create_user_email:
- first_name
- middle_name
- last_name
- attempt: ${str(attempt)}
publish:
- address
- password
break:
- CREATED
- FAILURE
navigate:
- CREATED: get_equipment
- UNAVAILABLE: print_fail
- FAILURE: print_fail
- get_equipment:
loop:
for: item, price in eval(order_map)
do:
order:
- item
- price: ${str(price)}
- missing: ${all_missing}
- cost: ${total_cost}
publish:
- all_missing: ${missing + not_ordered}
- total_cost: ${str(int(cost) + int(spent))}
break: []
navigate:
- AVAILABLE: check_min_reqs
- UNAVAILABLE: check_min_reqs
- check_min_reqs:
do:
base.contains:
- container: ${all_missing}
- sub: 'laptop'
navigate:
- DOES_NOT_CONTAIN: print_finish
- CONTAINS: print_warning
- print_warning:
do:
base.print:
- text: >
${first_name + ' ' + last_name +
' did not receive all the required equipment'}
navigate:
- SUCCESS: print_finish
- print_finish:
do:
base.print:
- text: >
${'Created address: ' + address + ' for: ' + first_name + ' ' + last_name + '\n' +
'Missing items: ' + all_missing + ' Cost of ordered items: ' + total_cost}
navigate:
- SUCCESS: SUCCESS
- on_failure:
- print_fail:
do:
base.print:
- text: "${'Failed to create address for: ' + first_name + ' ' + last_name}"
order.sl
namespace: tutorials.hiring
operation:
name: order
inputs:
- item
- price
python_action:
script: |
print 'Ordering: ' + item
import random
rand = random.randint(0, 2)
available = rand != 0
not_ordered = item + ';' if rand == 0 else ''
spent = 0 if rand == 0 else price
if rand == 0: print 'Unavailable'
outputs:
- not_ordered
- spent: ${str(spent)}
results:
- UNAVAILABLE: ${rand == 0}
- AVAILABLE
Lesson 12 - Decisions¶
Goal¶
In this lesson we’ll write a decision. We’ll learn how to use a decision by creating one that determines whether a given requirement is met.
Get Started¶
First, we’ll create a contains.sl file in the base folder. As we’ll see, a decision is very similar to an operation. The only real difference is that it cannot contain an action.
The contains decision will determine if a given substring is contained within a given container string.
Decision¶
From what we already know, a decision should be pretty self explanatory. So let’s just dive in.
namespace: tutorials.hiring
decision:
name: contains
inputs:
- container:
default: ""
required: false
- sub
results:
- CONTAINS: ${container.find(sub) >= 0}
- DOES_NOT_CONTAIN
Just about everything above should be familiar. The only new thing is the decision keyword, which replaces what would have been the operation keyword in an operation. Other than that, the decision has a namespace, name, inputs and results. A decision can have outputs as well, but we don’t use them here.
In terms of function, the decision returns a result of CONTAINS when sub is found in container and a result of DOES_NOT_CONTAIN otherwise.
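Since the result expressions are plain Python, the decision’s logic can be sketched as an ordinary function (a stand-alone illustration of the expressions, not how the engine itself evaluates results):

```python
# The decision's result expression relies on Python's str.find, which
# returns the index of the first match, or -1 when sub is absent.
def contains(container, sub):
    # Mirrors: CONTAINS when container.find(sub) >= 0,
    #          DOES_NOT_CONTAIN as the default result otherwise.
    return "CONTAINS" if container.find(sub) >= 0 else "DOES_NOT_CONTAIN"

print(contains("laptop;monitor;", "laptop"))  # CONTAINS
print(contains("monitor;", "laptop"))         # DOES_NOT_CONTAIN
print(contains("", "laptop"))                 # DOES_NOT_CONTAIN
```

The empty-string case matters here: when no items are missing, all_missing is "" and the decision correctly returns DOES_NOT_CONTAIN.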
Call from Flow¶
Now let’s call the decision from a flow. Unsurprisingly, a decision is called in the exact same way an operation or subflow would be called.
In new_hire.sl we’ll add a step right after get_equipment and call it check_min_reqs. That step will call our decision and navigate accordingly.
- check_min_reqs:
do:
base.contains:
- container: ${all_missing}
- sub: 'laptop'
navigate:
- DOES_NOT_CONTAIN: print_finish
- CONTAINS: print_warning
We pass the all_missing string to the decision to check if it contains the word 'laptop'. We’ll say that if the new hire didn’t get a laptop, we need to print a warning.
Clean Up¶
Finally, to get everything working properly we need to reroute the navigation of get_equipment and add a print_warning step.
The get_equipment navigation should now always point to check_min_reqs.
navigate:
- AVAILABLE: check_min_reqs
- UNAVAILABLE: check_min_reqs
And we’ll add a simple print_warning step.
- print_warning:
do:
base.print:
- text: >
${first_name + ' ' + last_name +
' did not receive all the required equipment'}
navigate:
- SUCCESS: print_finish
Now let’s review the possible scenarios.
- A laptop was ordered: get_equipment navigates to check_min_reqs, which returns a result of DOES_NOT_CONTAIN, therefore navigating to print_finish and then ending the flow. The output is exactly as it was before.
- A laptop was not ordered: get_equipment navigates to check_min_reqs, which returns a result of CONTAINS, therefore navigating to print_warning and then print_finish by default navigation, finally ending the flow. The output is as it was before, plus the warning is printed.
Run It¶
We can save the files and run the flow a few times to see that the warning is printed when appropriate and nothing is changed otherwise.
run --f <folder path>/tutorials/hiring/new_hire.sl --cp <folder path>/tutorials --i first_name=john,middle_name=e,last_name=doe
Download the Code¶
Up Next¶
In the next lesson we’ll see how to use existing content in your flows.
New Code - Complete¶
new_hire.sl
namespace: tutorials.hiring
imports:
base: tutorials.base
flow:
name: new_hire
inputs:
- first_name
- middle_name:
required: false
- last_name
- all_missing:
default: ""
required: false
private: true
- total_cost:
default: '0'
private: true
- order_map:
default: '{"laptop": 1000, "docking station": 200, "monitor": 500, "phone": 100}'
workflow:
- print_start:
do:
base.print:
- text: "Starting new hire process"
navigate:
- SUCCESS: create_email_address
- create_email_address:
loop:
for: attempt in range(1,5)
do:
create_user_email:
- first_name
- middle_name
- last_name
- attempt: ${str(attempt)}
publish:
- address
- password
break:
- CREATED
- FAILURE
navigate:
- CREATED: get_equipment
- UNAVAILABLE: print_fail
- FAILURE: print_fail
- get_equipment:
loop:
for: item, price in eval(order_map)
do:
order:
- item
- price: ${str(price)}
- missing: ${all_missing}
- cost: ${total_cost}
publish:
- all_missing: ${missing + not_ordered}
- total_cost: ${str(int(cost) + int(spent))}
break: []
navigate:
- AVAILABLE: check_min_reqs
- UNAVAILABLE: check_min_reqs
- check_min_reqs:
do:
base.contains:
- container: ${all_missing}
- sub: 'laptop'
navigate:
- DOES_NOT_CONTAIN: print_finish
- CONTAINS: print_warning
- print_warning:
do:
base.print:
- text: >
${first_name + ' ' + last_name +
' did not receive all the required equipment'}
navigate:
- SUCCESS: print_finish
- print_finish:
do:
base.print:
- text: >
${'Created address: ' + address + ' for: ' + first_name + ' ' + last_name + '\n' +
'Missing items: ' + all_missing + ' Cost of ordered items: ' + total_cost}
navigate:
- SUCCESS: SUCCESS
- on_failure:
- print_fail:
do:
base.print:
- text: "${'Failed to create address for: ' + first_name + ' ' + last_name}"
contains.sl
namespace: tutorials.hiring
decision:
name: contains
inputs:
- container:
default: ""
required: false
- sub
results:
- DOES_NOT_CONTAIN: ${container.find(sub) == -1}
- CONTAINS
Lesson 13 - Existing Content¶
Goal¶
In this lesson we’ll learn how to easily integrate ready-made content into our flow.
Get Started¶
Instead of printing that our flow has completed, let’s send an email to HR to let them know that the new hire’s email address has been created and notify them as to the status of the new hire’s equipment order. If you’re using a pre-built CLI you’ll have a folder named content that contains all of the ready-made content. If you’ve built the CLI from the source code, you’ll have to get the content mentioned below from the GitHub repository and point to the right location when running the flow.
Ready-Made Operation¶
We’ll use the send_mail operation, which is found in the base/mail folder. All ready-made content begins with a commented explanation of its purpose and its inputs, outputs and results.
Here’s the documentation for the send_mail operation:
####################################################
#!!
#! @description: Sends an email.
#!
#! @input hostname: email host
#! @input port: email port
#! @input from: email sender
#! @input to: email recipient
#! @input cc: cc recipient
#! optional
#! default: none
#! @input bcc: bcc recipient
#! optional
#! default: none
#! @input subject: email subject
#! @input body: email text
#! @input html_email: html formatted email
#! optional
#! default: true
#! @input read_receipt: request read receipt
#! optional
#! default: false
#! @input attachments: email attachments
#! optional
#! default: none
#! @input username: account username
#! optional
#! default: none
#! @input password: account password
#! optional
#! default: none
#! @input character_set: email character set
#! optional
#! default: UTF-8
#! @input content_transfer_encoding: email content transfer encoding
#! optional
#! default: base64
#! @input delimiter: delimiter to separate email recipients and attachments
#! optional
#! default: none
#! @result SUCCESS: mail was sent successfully (returnCode is equal to 0)
#! @result FAILURE: otherwise
#!!#
####################################################
We could get this information by opening the operation from the ready-made content folder or by running inspect on the operation.
inspect <content folder path>/io/cloudslang/base/mail/send_mail.sl
When calling the operation, we’ll need to pass values for all the arguments listed in the documentation that are not optional.
Imports¶
First, we’ll need to set up an import alias for the new operation since it doesn’t reside where our other operations and subflows do.
imports:
base: tutorials.base
mail: io.cloudslang.base.mail
For more information, see imports in the DSL reference.
Step¶
Then, all we really need to do is create a step in our flow that will call the send_mail operation. Let’s put it right after the print_finish step. We need to pass a host, port, from, to, subject and body. You’ll need to substitute the values in angle brackets (<>) to work for your email host. Notice that the body value is taken directly from the print_finish step with two slight changes. First, we turned the \n into a <br> since the html_email input defaults to true. Second, we added the temporary password published by the create_email_address step.
- send_mail:
do:
mail.send_mail:
- hostname: "<host>"
- port: "<port>"
- from: "<from>"
- to: "<to>"
- subject: "${'New Hire: ' + first_name + ' ' + last_name}"
- body: >
${'Created address: ' + address + ' for: ' + first_name + ' ' + last_name + '<br>' +
'Missing items: ' + all_missing + ' Cost of ordered items: ' + total_cost + '<br>' +
'Temporary password: ' + password}
navigate:
- FAILURE: FAILURE
- SUCCESS: SUCCESS
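The body expression is ordinary Python string concatenation; with sample values standing in for the flow variables (all invented for illustration), it evaluates like this:

```python
# Sample values playing the role of the flow variables (made up for the demo).
address = "j.doe@somecompany.com"
first_name, last_name = "john", "doe"
all_missing = "phone;"
total_cost = "1700"
password = "s3cret"  # published by the create_email_address step

# Same concatenation as the step's body expression; <br> replaces the \n
# used by print_finish because html_email defaults to true.
body = ('Created address: ' + address + ' for: ' + first_name + ' ' + last_name + '<br>' +
        'Missing items: ' + all_missing + ' Cost of ordered items: ' + total_cost + '<br>' +
        'Temporary password: ' + password)
print(body)
```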
Run It¶
We can save the files, run the flow and check that an email was sent with the proper information.
run --f <folder path>/tutorials/hiring/new_hire.sl --cp <folder path>/tutorials,<content folder path>/io/cloudslang/base --i first_name=john,last_name=doe
Download the Code¶
Up Next¶
In the next lesson we’ll see how to use system properties to send values to input variables.
New Code - Complete¶
new_hire.sl
namespace: tutorials.hiring
imports:
base: tutorials.base
mail: io.cloudslang.base.mail
flow:
name: new_hire
inputs:
- first_name
- middle_name:
required: false
- last_name
- all_missing:
default: ""
required: false
private: true
- total_cost:
default: '0'
private: true
- order_map:
default: '{"laptop": 1000, "docking station": 200, "monitor": 500, "phone": 100}'
workflow:
- print_start:
do:
base.print:
- text: "Starting new hire process"
navigate:
- SUCCESS: create_email_address
- create_email_address:
loop:
for: attempt in range(1,5)
do:
create_user_email:
- first_name
- middle_name
- last_name
- attempt: ${str(attempt)}
publish:
- address
- password
break:
- CREATED
- FAILURE
navigate:
- CREATED: get_equipment
- UNAVAILABLE: print_fail
- FAILURE: print_fail
- get_equipment:
loop:
for: item, price in eval(order_map)
do:
order:
- item
- price: ${str(price)}
- missing: ${all_missing}
- cost: ${total_cost}
publish:
- all_missing: ${missing + not_ordered}
- total_cost: ${str(int(cost) + int(spent))}
break: []
navigate:
- AVAILABLE: check_min_reqs
- UNAVAILABLE: check_min_reqs
- check_min_reqs:
do:
base.contains:
- container: ${all_missing}
- sub: 'laptop'
navigate:
- DOES_NOT_CONTAIN: print_finish
- CONTAINS: print_warning
- print_warning:
do:
base.print:
- text: >
${first_name + ' ' + last_name +
' did not receive all the required equipment'}
navigate:
- SUCCESS: print_finish
- print_finish:
do:
base.print:
- text: >
${'Created address: ' + address + ' for: ' + first_name + ' ' + last_name + '\n' +
'Missing items: ' + all_missing + ' Cost of ordered items: ' + total_cost}
navigate:
- SUCCESS: send_mail
- send_mail:
do:
mail.send_mail:
- hostname: "<host>"
- port: "<port>"
- from: "<from>"
- to: "<to>"
- subject: "${'New Hire: ' + first_name + ' ' + last_name}"
- body: >
${'Created address: ' + address + ' for: ' + first_name + ' ' + last_name + '<br>' +
'Missing items: ' + all_missing + ' Cost of ordered items: ' + total_cost + '<br>' +
'Temporary password: ' + password}
navigate:
- FAILURE: FAILURE
- SUCCESS: SUCCESS
- on_failure:
- print_fail:
do:
base.print:
- text: "${'Failed to create address for: ' + first_name + ' ' + last_name}"
Lesson 14 - System Properties¶
Goal¶
In this lesson we’ll learn how to use system properties to set the values of inputs.
Get Started¶
We’ll need to create a system properties file that contains the values we want to use for the inputs. Let’s create a properties folder under tutorials and in there create a file named bcompany.prop.sl. We’ll also need to use the system properties somewhere. We’ll use them in the new_hire.sl and generate_user_email.sl files.
System Properties File¶
A system properties file ends with the .prop.sl extension and can include a namespace. A system properties file also contains the properties keyword, which is mapped to a list of key:value pairs that define system property names and values.
Here’s what the contents of our system properties file looks like:
namespace: tutorials.properties
properties:
- domain: bcompany.com
- hostname: <host>
- port: '25'
- system_address: <test@test.com>
- hr_address: <test@test.com>
You’ll need to substitute the values in angle brackets (<>) to work for your email host.
Note
All system property values are interpreted as strings. So in our case, even if the port is a numeric value, its value when used as a system property will be a string representation. For example, entering a value of 25 will create a system property whose value is '25'.
For more information, see properties in the DSL Reference and Run with System Properties in the CLI documentation.
Retrieve Values¶
Now we’ll use the system properties to place values in our inputs and step arguments. We retrieve system property values using the get_sp() function. We’ll do this in two places.
Note
The get_sp() function can also be used to retrieve system property values in publish, output and result expressions.
First, we’ll use a system property in the inputs of generate_user_email by calling the get_sp() function in the default property of the domain input. The get_sp() function will retrieve the value associated with the property defined by the fully qualified name in its first argument. If no such property is found, the function will return the second argument.
inputs:
- first_name
- middle_name:
required: false
default: ""
- last_name
- domain:
default: ${get_sp('tutorials.properties.domain', 'acompany.com')}
private: true
- attempt
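The lookup-with-fallback behavior of get_sp() can be emulated with a plain dictionary (a hypothetical helper for illustration only, not the real engine implementation):

```python
# A minimal emulation of get_sp() lookup semantics: system properties are
# a flat map of fully qualified names to string values.
system_properties = {
    "tutorials.properties.domain": "bcompany.com",
    "tutorials.properties.port": "25",
}

def get_sp(name, default=None):
    # Return the property's value if it is defined; otherwise fall back
    # to the second argument, just like the DSL function.
    return system_properties.get(name, default)

print(get_sp("tutorials.properties.domain", "acompany.com"))  # bcompany.com
print(get_sp("tutorials.properties.missing", "acompany.com")) # acompany.com
```

This is why the domain input above still works without a system properties file: the flow simply falls back to 'acompany.com'.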
The second place we’ll use system properties is in the new_hire flow. Here we’ll retrieve the system properties in the arguments of the send_mail step we created last lesson. We’ll use the get_sp() function to get the hostname, port, from and to values from the system properties file.
- send_mail:
do:
mail.send_mail:
- hostname: ${get_sp('tutorials.properties.hostname')}
- port: ${get_sp('tutorials.properties.port')}
- from: ${get_sp('tutorials.properties.system_address')}
- to: ${get_sp('tutorials.properties.hr_address')}
- subject: "${'New Hire: ' + first_name + ' ' + last_name}"
- body: >
${'Created address: ' + address + ' for: ' + first_name + ' ' + last_name + '<br>' +
'Missing items: ' + all_missing + ' Cost of ordered items: ' + total_cost + '<br>' +
'Temporary password: ' + password}
navigate:
- FAILURE: FAILURE
- SUCCESS: SUCCESS
For more information, see get_sp() in the DSL Reference.
Run It¶
We can save the files and run the flow to see that the values are being taken from the system properties file we specify. If we want to swap out the values with another set, all we have to do is point to a different system properties file.
run --f <folder path>/tutorials/hiring/new_hire.sl --cp <folder path>/tutorials,<content folder path>/base --i first_name=john,last_name=doe --spf <folder path>/tutorials/properties/bcompany.prop.sl
For more information on running with a system properties file, see Run with System Properties in the CLI documentation.
Download the Code¶
Up Next¶
In the next lesson we’ll see how to use 3rd party Python packages in your operations’ actions.
New Code - Complete¶
new_hire.sl
namespace: tutorials.hiring
imports:
base: tutorials.base
mail: io.cloudslang.base.mail
flow:
name: new_hire
inputs:
- first_name
- middle_name:
required: false
- last_name
- all_missing:
default: ""
required: false
private: true
- total_cost:
default: '0'
private: true
- order_map:
default: '{"laptop": 1000, "docking station": 200, "monitor": 500, "phone": 100}'
workflow:
- print_start:
do:
base.print:
- text: "Starting new hire process"
navigate:
- SUCCESS: create_email_address
- create_email_address:
loop:
for: attempt in range(1,5)
do:
create_user_email:
- first_name
- middle_name
- last_name
- attempt: ${str(attempt)}
publish:
- address
- password
break:
- CREATED
- FAILURE
navigate:
- CREATED: get_equipment
- UNAVAILABLE: print_fail
- FAILURE: print_fail
- get_equipment:
loop:
for: item, price in eval(order_map)
do:
order:
- item
- price: ${str(price)}
- missing: ${all_missing}
- cost: ${total_cost}
publish:
- all_missing: ${missing + not_ordered}
- total_cost: ${str(int(cost) + int(spent))}
break: []
navigate:
- AVAILABLE: check_min_reqs
- UNAVAILABLE: check_min_reqs
- check_min_reqs:
do:
base.contains:
- container: ${all_missing}
- sub: 'laptop'
navigate:
- DOES_NOT_CONTAIN: print_finish
- CONTAINS: print_warning
- print_warning:
do:
base.print:
- text: >
${first_name + ' ' + last_name +
' did not receive all the required equipment'}
navigate:
- SUCCESS: print_finish
- print_finish:
do:
base.print:
- text: >
${'Created address: ' + address + ' for: ' + first_name + ' ' + last_name + '\n' +
'Missing items: ' + all_missing + ' Cost of ordered items: ' + total_cost}
navigate:
- SUCCESS: send_mail
- send_mail:
do:
mail.send_mail:
- hostname: ${get_sp('tutorials.properties.hostname')}
- port: ${get_sp('tutorials.properties.port')}
- from: ${get_sp('tutorials.properties.system_address')}
- to: ${get_sp('tutorials.properties.hr_address')}
- subject: "${'New Hire: ' + first_name + ' ' + last_name}"
- body: >
${'Created address: ' + address + ' for: ' + first_name + ' ' + last_name + '<br>' +
'Missing items: ' + all_missing + ' Cost of ordered items: ' + total_cost + '<br>' +
'Temporary password: ' + password}
navigate:
- FAILURE: FAILURE
- SUCCESS: SUCCESS
- on_failure:
- print_fail:
do:
base.print:
- text: "${'Failed to create address for: ' + first_name + ' ' + last_name}"
generate_user_email.sl
namespace: tutorials.hiring
operation:
name: generate_user_email
inputs:
- first_name
- middle_name:
required: false
default: ""
- last_name
- domain:
default: ${get_sp('tutorials.properties.domain', 'acompany.com')}
private: true
- attempt
python_action:
script: |
attempt = int(attempt)
if attempt == 1:
address = first_name[0:1] + '.' + last_name + '@' + domain
elif attempt == 2:
address = first_name + '.' + first_name[0:1] + '@' + domain
elif attempt == 3 and middle_name != '':
address = first_name + '.' + middle_name[0:1] + '.' + last_name + '@' + domain
else:
address = ''
#print address
outputs:
- email_address: ${address}
results:
- FAILURE: ${address == ''}
- SUCCESS
bcompany.prop.sl
namespace: tutorials.properties
properties:
- domain: bcompany.com
- hostname: <host>
- port: '25'
- system_address: <test@test.com>
- hr_address: <test@test.com>
Note
You need to substitute the values in angle brackets (<>) to work for your email host.
Lesson 15 - 3rd Party Python Packages¶
Goal¶
In this lesson we’ll learn how to import 3rd party Python packages to use in an operation’s python_action.
Get Started¶
In this lesson we’ll be installing a 3rd party Python package. In order to do so you’ll need to have Python and pip installed on your machine. You can download Python (version 2.7) from python.org. Python 2.7.9 and later include pip by default. If you already have Python but don’t have pip installed on your machine, see the pip documentation for installation instructions.
We’ll also need to add a requirements.txt file to a python-lib folder which is at the same level as the bin folder that the CLI executable resides in. If you downloaded a pre-built CLI the requirements.txt file is already there and we will be appending to its contents.
The folder structure where the CLI executable is should look something like this (other folders omitted for simplicity):
- cslang-cli
- bin
- cslang
- cslang.bat
- content
- CloudSlang ready-made content
- lib
- includes all the Java .jar files for the CLI
- python-lib
- requirements.txt
- bin
And finally, we’ll need a new file, fancy_text.sl in the tutorials/hiring folder, to house a new operation.
Requirements¶
In the requirements.txt file we’ll list all the Python packages we need for our project. In our case we’ll add a package that will allow us to create large lettered strings using ordinary screen characters. The package is called pyfiglet. A quick search on PyPI tells us that the current version (at the time this tutorial was written) is 0.7.2, so we’ll use that one. We also need to install setuptools since pyfiglet depends on it. Each package we need takes up one line in our requirements.txt file.
setuptools
pyfiglet == 0.7.2
Installing¶
Now we need to use pip to download and install our packages.
To do so, run the following command from the python-lib directory:
pip install -r requirements.txt -t .
Note
If your machine is behind a proxy you’ll need to specify the proxy using pip’s --proxy flag.
If everything has gone well, you should now see the pyfiglet package’s files in the python-lib folder along with the setuptools files.
Operation¶
Next, let’s write an operation that will let us turn normal text into something fancy using pyfiglet. All we need to do is import pyfiglet as we would normally do in Python and use it. We also have to do a little bit of work to turn the regular string we get from calling renderText into something that will look right in our HTML email.
namespace: tutorials.hiring
operation:
name: fancy_text
inputs:
- text
python_action:
script: |
from pyfiglet import Figlet
f = Figlet(font='slant')
fancy = '<pre>' + f.renderText(text).replace('\n','<br>').replace(' ', '&nbsp;') + '</pre>'
outputs:
- fancy
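The HTML clean-up at the end of the script can be tried on its own; here a hard-coded stand-in replaces pyfiglet’s output so the effect of the replace chain is visible (the &nbsp; substitution for spaces is assumed from the HTML-email goal, since browsers collapse runs of plain spaces):

```python
# A stand-in for what Figlet's renderText might return: a multi-line
# block of ASCII art (pyfiglet itself is not needed for this demo).
rendered = "  _ \n | |\n |_|\n"

# Same chain as the operation's script: newlines become <br>, spaces
# become &nbsp; so the character art keeps its shape in an HTML email.
fancy = '<pre>' + rendered.replace('\n', '<br>').replace(' ', '&nbsp;') + '</pre>'
print(fancy)
```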
Note
CloudSlang uses the Jython implementation of Python 2.7. For information on Jython’s limitations, see the Jython FAQ.
Step¶
Now we can create a step in the new_hire flow to send some text to the fancy_text operation and publish the output so we can use it in our email. We’ll put the new step between print_finish and send_mail.
- fancy_name:
do:
fancy_text:
- text: ${first_name + ' ' + last_name}
publish:
- fancy_text: ${fancy}
navigate:
- SUCCESS: send_mail
Use It¶
Finally, we need to change the body of the email to include our new fancy text.
- send_mail:
do:
mail.send_mail:
- hostname: ${get_sp('tutorials.properties.hostname')}
- port: ${get_sp('tutorials.properties.port')}
- from: ${get_sp('tutorials.properties.system_address')}
- to: ${get_sp('tutorials.properties.hr_address')}
- subject: "${'New Hire: ' + first_name + ' ' + last_name}"
- body: >
${fancy_text + '<br>' +
'Created address: ' + address + ' for: ' + first_name + ' ' + last_name + '<br>' +
'Missing items: ' + all_missing + ' Cost of ordered items: ' + total_cost + '<br>' +
'Temporary password: ' + password}
navigate:
- FAILURE: FAILURE
- SUCCESS: SUCCESS
Run It¶
We can save the files and run the flow. When the email is sent it should include the new fancy text we added to it.
run --f <folder path>/tutorials/hiring/new_hire.sl --cp <folder path>/tutorials,<content folder path>/base --i first_name=john,last_name=doe --spf <folder path>/tutorials/properties/bcompany.prop.sl
Download the Code¶
Up Next¶
In the next lesson we’ll see how to use a parallel loop.
New Code - Complete¶
new_hire.sl
namespace: tutorials.hiring
imports:
base: tutorials.base
mail: io.cloudslang.base.mail
flow:
name: new_hire
inputs:
- first_name
- middle_name:
required: false
- last_name
- all_missing:
default: ""
required: false
private: true
- total_cost:
default: '0'
private: true
- order_map:
default: '{"laptop": 1000, "docking station": 200, "monitor": 500, "phone": 100}'
workflow:
- print_start:
do:
base.print:
- text: "Starting new hire process"
navigate:
- SUCCESS: create_email_address
- create_email_address:
loop:
for: attempt in range(1,5)
do:
create_user_email:
- first_name
- middle_name
- last_name
- attempt: ${str(attempt)}
publish:
- address
- password
break:
- CREATED
- FAILURE
navigate:
- CREATED: get_equipment
- UNAVAILABLE: print_fail
- FAILURE: print_fail
- get_equipment:
loop:
for: item, price in eval(order_map)
do:
order:
- item
- price: ${str(price)}
- missing: ${all_missing}
- cost: ${total_cost}
publish:
- all_missing: ${missing + not_ordered}
- total_cost: ${str(int(cost) + int(spent))}
break: []
navigate:
- AVAILABLE: check_min_reqs
- UNAVAILABLE: check_min_reqs
- check_min_reqs:
do:
base.contains:
- container: ${all_missing}
- sub: 'laptop'
navigate:
- DOES_NOT_CONTAIN: print_finish
- CONTAINS: print_warning
- print_warning:
do:
base.print:
- text: >
${first_name + ' ' + last_name +
' did not receive all the required equipment'}
navigate:
- SUCCESS: print_finish
- print_finish:
do:
base.print:
- text: >
${'Created address: ' + address + ' for: ' + first_name + ' ' + last_name + '\n' +
'Missing items: ' + all_missing + ' Cost of ordered items: ' + total_cost}
navigate:
- SUCCESS: fancy_name
- fancy_name:
do:
fancy_text:
- text: ${first_name + ' ' + last_name}
publish:
- fancy_text: ${fancy}
navigate:
- SUCCESS: send_mail
- send_mail:
do:
mail.send_mail:
- hostname: ${get_sp('tutorials.properties.hostname')}
- port: ${get_sp('tutorials.properties.port')}
- from: ${get_sp('tutorials.properties.system_address')}
- to: ${get_sp('tutorials.properties.hr_address')}
- subject: "${'New Hire: ' + first_name + ' ' + last_name}"
- body: >
${fancy_text + '<br>' +
'Created address: ' + address + ' for: ' + first_name + ' ' + last_name + '<br>' +
'Missing items: ' + all_missing + ' Cost of ordered items: ' + total_cost + '<br>' +
'Temporary password: ' + password}
navigate:
- FAILURE: FAILURE
- SUCCESS: SUCCESS
- on_failure:
- print_fail:
do:
base.print:
- text: "${'Failed to create address for: ' + first_name + ' ' + last_name}"
fancy_text.sl
namespace: tutorials.hiring
operation:
name: fancy_text
inputs:
- text
python_action:
script: |
from pyfiglet import Figlet
f = Figlet(font='slant')
fancy = '<pre>' + f.renderText(text).replace('\n','<br>').replace(' ', '&nbsp;') + '</pre>'
outputs:
- fancy
Lesson 16 - Parallel Loop¶
Goal¶
In this lesson we’ll learn how to loop in parallel. When looping in parallel, a new branch is created for each value in a list and the action associated with the step is run for each branch in parallel.
Get Started¶
We’ll be creating a new flow that will call the new_hire flow we’ve built in previous lessons as a subflow. Let’s begin by creating a new file named hire_all.sl in the tutorials/hiring folder for our new flow. We’ll also need new_hire.sl, because we’re going to make some minor changes to it as well. And finally, we’ll pass our flow inputs using a file, so let’s create a tutorials/inputs folder and add a hires.yaml file.
Outputs¶
Since we’ll be using the new_hire flow as a subflow, it will be helpful if we add some flow outputs for a parent flow to make use of. We’ll simply add an outputs section at the bottom of our flow to output a bit of information. This outputs section is quite a distance from the flow key, so be extra careful to place it at the proper indentation.
outputs:
- address
- final_cost: ${total_cost}
Parent Flow¶
Our new hire_all flow is going to take in a list of names of people being hired and will call the new_hire flow for each one of them. It will be looping in parallel, so all the new_hire flows will be running simultaneously.
In hire_all.sl we can start off as usual by declaring a namespace, specifying the imports and taking in the inputs, which in our case is a list of names.
namespace: tutorials.hiring
imports:
base: tutorials.base
flow:
name: hire_all
inputs:
- names_list
workflow:
Loop Syntax¶
A parallel loop looks pretty similar to a normal for loop, but with a few key differences.
Let’s create a new step named process_all in which we’ll do our looping. Each branch of the loop will call the new_hire flow.
- process_all:
parallel_loop:
for: name in eval(names_list)
do:
new_hire:
- first_name: ${name["first"]}
- middle_name: ${name.get("middle","")}
- last_name: ${name["last"]}
As you can see, so far it is almost identical to a regular for loop, except the loop key has been replaced by parallel_loop.
The names_list input will be a list of dictionaries containing name information with the keys first, middle and last. For each name in names_list the new_hire flow will be called and passed the corresponding name values. The various branches running the new_hire flow will run in parallel and the rest of the flow will continue only after all the branches have completed.
For more information, see parallel_loop in the DSL reference.
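The branching behavior can be sketched in plain Python with a thread pool. This is a rough analogy only: new_hire here is a stub, and pool.map returns results in submission order, whereas the engine orders branches_context by completion.

```python
# Each name gets its own branch; all branches run concurrently, and we
# continue only once every branch has finished.
from concurrent.futures import ThreadPoolExecutor

def new_hire(first_name, middle_name, last_name):
    # Stub standing in for the subflow; returns its published outputs.
    # The address format and cost are invented for the demo.
    return {"address": first_name[0] + "." + last_name + "@bcompany.com",
            "final_cost": 1700}

names_list = [{"first": "joe", "middle": "p", "last": "bloggs"},
              {"first": "jane", "last": "doe"}]

with ThreadPoolExecutor() as pool:
    branches_context = list(pool.map(
        lambda n: new_hire(n["first"], n.get("middle", ""), n["last"]),
        names_list))

print(len(branches_context))  # 2 -- one result dictionary per branch
```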
Publish¶
Next we perform aggregation in the publish section in a similar manner to what we do in a normal for loop (as we did in lesson 11 - Loop Aggregation). Publish occurs only after all branches have completed.
In most cases the publish will make use of the branches_context list. This is a list that is populated with all of the outputs from all of the branches. For example, in our case, branches_context[0] will contain the keys address and final_cost, corresponding to the values output by the first branch to complete. Similarly, branches_context[1] will contain the keys address and final_cost mapped to the values output by the second branch to complete.
There is no way to predict the order in which branches will complete, so branches_context is rarely accessed using a particular index. Instead, Python expressions are used to extract the desired aggregations.
- process_all:
    parallel_loop:
      for: name in eval(names_list)
      do:
        new_hire:
          - first_name: ${name["first"]}
          - middle_name: ${name.get("middle","")}
          - last_name: ${name["last"]}
    publish:
      - email_list: "${', '.join(filter(lambda x : x != '', map(lambda x : str(x['address']), branches_context)))}"
      - cost: "${str(sum(map(lambda x : int(x['final_cost']), branches_context)))}"
In our case we use the map(), filter() and sum() Python functions to create a list of all the email addresses that were created and a sum of all the equipment costs.
Let’s look a bit closer at one of the publish aggregations to better understand what’s going on. Each time a branch of the parallel loop finishes running the new_hire subflow, it publishes a final_cost value. Each of those individual final_cost values gets added, under the final_cost key, to the branches_context list at index n, where n indicates the order in which the branches finish. So, if we were to loop through branches_context, we would find at branches_context[n]['final_cost'] the final_cost value that was published by the nth new_hire subflow to finish running. Instead of looping through branches_context, we use a Python lambda expression in conjunction with the map function to extract just the final_cost values (converted to integers, since all published values are strings) from each branches_context[n]['final_cost'] into a new list. Finally, we use the Python sum function to add up all the extracted values in our new list and publish that value as cost.
For more information, see publish and branches_context in the DSL reference.
For more information on the Python constructs used here, see lambda, map and sum in the Python documentation.
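To see exactly what those two publish expressions compute, the same aggregation can be run in plain Python over a hand-built branches_context. The addresses and costs below are made-up sample values for illustration.

```python
# A mock branches_context: one dictionary of published outputs per
# completed branch, in completion order.
branches_context = [
    {"address": "joe.bloggs@somecompany.com", "final_cost": "1100"},
    {"address": "", "final_cost": "0"},  # a branch that created no address
    {"address": "jane.doe@somecompany.com", "final_cost": "1700"},
]

# Same expression as the email_list publish: map extracts each address,
# filter drops the empty ones, and join builds a single string.
email_list = ', '.join(filter(lambda x: x != '',
                              map(lambda x: str(x['address']), branches_context)))

# Same expression as the cost publish: extract each final_cost string,
# convert it to an integer, and sum the results.
cost = str(sum(map(lambda x: int(x['final_cost']), branches_context)))

print(email_list)  # joe.bloggs@somecompany.com, jane.doe@somecompany.com
print(cost)        # 2800
```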
Input File¶
We’ll use an input file to send the flow our list of names. An input file is very similar to a system properties file. It is written in plain YAML and therefore ends with the .yaml extension.
Here are the contents of our hires.yaml input file, which we created in the tutorials/inputs folder.
names_list: '[{"first": "joe", "middle": "p", "last": "bloggs"}, {"first": "jane", "last": "doe"}, {"first": "juan", "last": "perez"}]'
The file contains a names_list key that maps to a stringified version of a list of name information. Remember, all inputs must be strings, so here we must use a string as well.
For more information, see Using an Inputs File in the CLI documentation.
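Since flow inputs are strings, the for: name in eval(names_list) expression in the flow converts that string back into a list of dictionaries. The same round trip can be reproduced in plain Python; ast.literal_eval is used here as a safer stand-in for eval when parsing plain data literals.

```python
import ast

# The stringified list exactly as it appears in hires.yaml.
names_list = ('[{"first": "joe", "middle": "p", "last": "bloggs"}, '
              '{"first": "jane", "last": "doe"}, '
              '{"first": "juan", "last": "perez"}]')

# What eval(names_list) does in the flow: turn the string back into
# a list of dictionaries.
names = ast.literal_eval(names_list)

# The flow then reads each name the same way the step inputs do.
print(names[0]["first"])             # joe
print(names[0].get("middle", ""))    # p
print(names[1].get("middle", ""))    # empty string: jane has no middle name
```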
Steps¶
Finally, we have to add the steps we referred to in the navigation section. We can put them right after the process_all step.
- print_success:
    do:
      base.print:
        - text: >
            ${"All addresses were created successfully.\nEmail addresses created: "
            + email_list + "\nTotal cost: " + cost}
    navigate:
      - SUCCESS: SUCCESS
- on_failure:
    - print_failure:
        do:
          base.print:
            - text: >
                ${"Some addresses were not created or there is an email issue.\nEmail addresses created: "
                + email_list + "\nTotal cost: " + cost}
Run It¶
We can save the files and run the flow. It’s a bit harder to track what has happened now because there are quite a few things happening at once. On careful inspection you will see that each step in the new_hire flow, and in each of its subflows, is run for each of the people in the names_list input.
run --f <folder path>/tutorials/hiring/hire_all.sl --cp <folder path>/tutorials,<content folder path>/base --if <folder path>/tutorials/inputs/hires.yaml --spf <folder path>/tutorials/properties/bcompany.prop.sl
Download the Code¶
New Code - Complete¶
new_hire.sl
namespace: tutorials.hiring
imports:
  base: tutorials.base
  mail: io.cloudslang.base.mail
flow:
  name: new_hire
  inputs:
    - first_name
    - middle_name:
        required: false
    - last_name
    - all_missing:
        default: ""
        required: false
        private: true
    - total_cost:
        default: '0'
        private: true
    - order_map:
        default: '{"laptop": 1000, "docking station": 200, "monitor": 500, "phone": 100}'
  workflow:
    - print_start:
        do:
          base.print:
            - text: "Starting new hire process"
        navigate:
          - SUCCESS: create_email_address
    - create_email_address:
        loop:
          for: attempt in range(1,5)
          do:
            create_user_email:
              - first_name
              - middle_name
              - last_name
              - attempt: ${str(attempt)}
          publish:
            - address
            - password
          break:
            - CREATED
            - FAILURE
        navigate:
          - CREATED: get_equipment
          - UNAVAILABLE: print_fail
          - FAILURE: print_fail
    - get_equipment:
        loop:
          for: item, price in eval(order_map)
          do:
            order:
              - item
              - price: ${str(price)}
              - missing: ${all_missing}
              - cost: ${total_cost}
          publish:
            - all_missing: ${missing + not_ordered}
            - total_cost: ${str(int(cost) + int(spent))}
          break: []
        navigate:
          - AVAILABLE: check_min_reqs
          - UNAVAILABLE: check_min_reqs
    - check_min_reqs:
        do:
          base.contains:
            - container: ${all_missing}
            - sub: 'laptop'
        navigate:
          - DOES_NOT_CONTAIN: print_finish
          - CONTAINS: print_warning
    - print_warning:
        do:
          base.print:
            - text: >
                ${first_name + ' ' + last_name +
                ' did not receive all the required equipment\n'}
        navigate:
          - SUCCESS: print_finish
    - print_finish:
        do:
          base.print:
            - text: >
                ${'Created address: ' + address + ' for: ' + first_name + ' ' + last_name + '\n' +
                'Missing items: ' + all_missing + ' Cost of ordered items: ' + total_cost}
        navigate:
          - SUCCESS: fancy_name
    - fancy_name:
        do:
          fancy_text:
            - text: ${first_name + ' ' + last_name}
        publish:
          - fancy_text: ${fancy}
        navigate:
          - SUCCESS: send_mail
    - send_mail:
        do:
          mail.send_mail:
            - hostname: ${get_sp('tutorials.properties.hostname')}
            - port: ${get_sp('tutorials.properties.port')}
            - from: ${get_sp('tutorials.properties.system_address')}
            - to: ${get_sp('tutorials.properties.hr_address')}
            - subject: "${'New Hire: ' + first_name + ' ' + last_name}"
            - body: >
                ${fancy_text + '<br>' +
                'Created address: ' + address + ' for: ' + first_name + ' ' + last_name + '<br>' +
                'Missing items: ' + all_missing + ' Cost of ordered items: ' + total_cost + '<br>' +
                'Temporary password: ' + password}
        navigate:
          - FAILURE: FAILURE
          - SUCCESS: SUCCESS
    - on_failure:
        - print_fail:
            do:
              base.print:
                - text: "${'Failed to create address for: ' + first_name + ' ' + last_name}"
  outputs:
    - address
    - final_cost: ${total_cost}
hire_all.sl
namespace: tutorials.hiring
imports:
  base: tutorials.base
flow:
  name: hire_all
  inputs:
    - names_list
  workflow:
    - process_all:
        parallel_loop:
          for: name in eval(names_list)
          do:
            new_hire:
              - first_name: ${name["first"]}
              - middle_name: ${name.get("middle","")}
              - last_name: ${name["last"]}
        publish:
          - email_list: "${', '.join(filter(lambda x : x != '', map(lambda x : str(x['address']), branches_context)))}"
          - cost: "${str(sum(map(lambda x : int(x['final_cost']), branches_context)))}"
        navigate:
          - SUCCESS: print_success
          - FAILURE: print_failure
    - print_success:
        do:
          base.print:
            - text: >
                ${"All addresses were created successfully.\nEmail addresses created: "
                + email_list + "\nTotal cost: " + cost}
        navigate:
          - SUCCESS: SUCCESS
    - on_failure:
        - print_failure:
            do:
              base.print:
                - text: >
                    ${"Some addresses were not created or there is an email issue.\nEmail addresses created: "
                    + email_list + "\nTotal cost: " + cost}
hires.yaml
names_list: '[{"first": "joe", "middle": "p", "last": "bloggs"}, {"first": "jane", "last": "doe"}, {"first": "juan", "last": "perez"}]'