Sparrow glues it all!
Sparrow6 is an automation system written in Perl6 but extendable with many languages.
Sparrow6 is fun because it's designed to get work done, instead of imposing a strict workflow or a certain paradigm. The cool thing about it is that you can glue code written in many languages into a high-level scenario written in Perl6.
Perl6 gives you the power of a modern and expressive language, while the plugin system with multi-language support allows you to write some bits of code in the language that fits the task at hand best.
What follows is an example of combining Bash/Perl5 and Perl6, taken from a real task at my work.
The task: generating Ado pipeline code
One of my latest tasks was to generate an Ado pipeline with a lot of variables coming from JSON files; the typical pipeline code would just contain references to those vars:
- environment: "$(environment)"
- region: "$(region)"
- cluster_name: "$(cluster_name)"
- workernodes_amount: "$(workernodes_amount)"
# and so on
Those variables are defined somewhere else in JSON file:
{
"variables":
{
"environment": {"value":"dev"},
"region": {"value":"southcentralus"},
"cluster_name": {"value":"CHANGEME!!!"},
"workernodes_amount": {"value":"4"}
}
}
Whenever this file is updated, we need to ensure that the respective yaml pipeline code is updated as well.
An additional problem here is that we can't just parse the JSON file into a related object and then iterate through the variables, because in that case we would lose the original order of the variables as presented in the JSON file (keeping the order is an external requirement for the pipeline code itself). So we have to use Perl6 regexp patterns instead; let me show how in the next section.
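To see why plain parsing does not work here, below is a minimal sketch (assuming the JSON::Fast module and the vars.json file shown above): parsing into a Perl6 Hash gives back the keys in hash order, not in the order they appear in the file.

use JSON::Fast;

# parse vars.json into a Perl6 Hash
my %data = from-json(slurp "vars.json");

# Hash keys come back in arbitrary order,
# not in the order they were written in vars.json
say %data<variables>.keys;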
Solution
First of all, let's define all the required steps of our solution:
- parse the JSON files and extract the variables part from them
- define a template for the yaml pipeline source code
- generate the yaml pipeline using the template and the extracted variables
Parse JSON files and extract variables
We create a small Sparrow6 task for this:
tree .tom/tasks/parse-json/
.tom/tasks/parse-json/
├── task.bash
└── task.check
task.bash:
cat $(config file)
task.check:
between: { ^^ \s+ '{' } { ^^ \s+ '}' }
regexp: ^^ \s+ '"' (\S+) '"'
end:
code: update_state({ list => [ map {$_->[0]} @{captures()}] });
Comments to the code:
- task.bash: the task just cats the file passed via the file parameter; it's a Bash script.
- task.check: defines code that gets run after task.bash is executed; it is written in the Sparrow6 Task Check Language, which is handy when one needs to analyse/parse output from a script. We use task.check to parse the STDOUT coming from task.bash (the json file content) and capture the variable names, see how below:
- the between: expression narrows down the search context to anything between the { and } symbols preceded by whitespace at the beginning of a line, which corresponds to our "variables" section.
- the regexp: expression captures the variable names occurring within the between block.
- the code: expression saves the captured data as the task state, which allows the task to return data back to the caller. The code expression is written in Perl5.
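For the sample vars.json above, the resulting task state would conceptually look like this (a sketch written in the Perl5 notation used by the code: expression; the actual content is whatever regexp: captured):

{ list => [ 'environment', 'region', 'cluster_name', 'workernodes_amount' ] }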
Define template
Sparrow6 provides a DSL for the Perl5 Template Toolkit, so we are going to use it as the template engine.
.tom/templates/pipeline.tmpl:
[% FOREACH i IN list -%]
- [% i %]: "\$([% i %])"
[% END -%]
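For the variables captured from the sample vars.json, the rendered template should produce pipeline code like this (the same content that shows up in the diff of the run output below):

- environment: "$(environment)"
- region: "$(region)"
- cluster_name: "$(cluster_name)"
- workernodes_amount: "$(workernodes_amount)"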
Generate pipeline
Finally, let's write a high-level Perl6 scenario that glues all the bits together:
.tom/generate-pipeline.pl6:
#!perl6
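# run the custom parse-json task and collect its state (the captured variable names)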
my %state = Sparrow6::Task::Runner::Api.new(
name => "extract variables",
root => ".tom/tasks/parse-json",
parameters => %(
file => "vars.json"
)
).task-run;
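# render the Template Toolkit template with the captured list and write it to pipeline.yml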
template-create 'pipeline.yml', %(
source => ( slurp '.tom/templates/pipeline.tmpl' ),
variables => %(
list => %state<list><>
),
);
Comments to the code:
We combine the Sparrow6 core function template-create with the custom task tasks/parse-json.
As I said, Sparrow6 is super flexible in gluing different parts/languages together.
Output example
Let's run the scenario through the Tomtit task runner and see the output:
tom generate-pipeline
19:24:16 06/12/2019 [repository] index updated from file:///home/melezhik/projects/repo/api/v1/index
19:24:19 06/12/2019 [extract variables] {
19:24:19 06/12/2019 [extract variables] "variables":
19:24:19 06/12/2019 [extract variables] {
19:24:19 06/12/2019 [extract variables] "environment": {"value":"dev"},
19:24:19 06/12/2019 [extract variables] "region": {"value":"southcentralus"},
19:24:19 06/12/2019 [extract variables] "cluster_name": {"value":"CHANGEME!!!"},
19:24:19 06/12/2019 [extract variables] "workernodes_amount": {"value":"4"}
19:24:19 06/12/2019 [extract variables] }
19:24:19 06/12/2019 [extract variables] }
19:24:19 06/12/2019 [extract variables]
[task check] stdout match (r) <^^ \s+ '"' (\S+) '"'> True
19:24:21 06/12/2019 [create template pipeline.yml] content generated at /home/melezhik/.sparrow6/tmp/452443/content.tmp
[task check] stdout match <content generated> True
19:24:28 06/12/2019 [create template pipeline.yml] Files /home/melezhik/.sparrow6/tmp/452443/content.tmp and pipeline.yml differ
19:24:28 06/12/2019 [create template pipeline.yml] updating target pipeline.yml ...
19:24:28 06/12/2019 [create template pipeline.yml] outthentic_message: updated ok
19:24:28 06/12/2019 [create template pipeline.yml] --- /home/melezhik/.sparrow6/tmp/452443/content.tmp 2019-06-12 19:24:21.695451100 +0000
19:24:28 06/12/2019 [create template pipeline.yml] +++ pipeline.yml 2019-06-12 19:24:28.885209600 +0000
19:24:28 06/12/2019 [create template pipeline.yml] @@ -1,4 +0,0 @@
19:24:28 06/12/2019 [create template pipeline.yml] -- environment: "$(environment)"
19:24:28 06/12/2019 [create template pipeline.yml] -- region: "$(region)"
19:24:29 06/12/2019 [create template pipeline.yml] -- cluster_name: "$(cluster_name)"
19:24:29 06/12/2019 [create template pipeline.yml] -- workernodes_amount: "$(workernodes_amount)"
19:24:29 06/12/2019 [create template pipeline.yml] target pipeline.yml updated
19:24:29 06/12/2019 [create template pipeline.yml] set target mode to 644
[task check] stdout match <target \s+ \S+ \s+ updated> True
That's it. Thank you for reading. Comments and feedback are always welcome.