Ziniki Deployer
Most deployment tools (for example, Terraform or CloudFormation) seem to be written as an afterthought, with the most important criteria being how easy the tool itself is to build and how much infrastructure it can cover with the smallest amount of code. Little or no thought seems to be given to how easy it is to build, debug, validate and understand the configurations they are used to create.
Obviously, scripts or programs that are hard to understand are expensive to maintain and are more likely to contain bugs that are hard to spot. Identifying subtle security issues is guaranteed to be hard if you have to decipher a custom policy nested inside a JSON document. And yet, this is what platform engineers are expected to do when deploying expansive products.
Ziniki Deployer is different. It is designed, from the ground up, to be a programming language that helps you to build modular, task-based scripts that are easy to understand and reason about. Currently, it lacks support for a broad range of platforms and functions, but it is built in a _modular_ fashion that makes it possible to support those in a consistent way as they are needed. More importantly, it makes the features it does support easy to use.
Ziniki Deployer has six major attributes that make it different to other deployment tools out there:
  • It has a focus on clarity: scripts should clearly communicate what they do, and not get lost in the minutiae of how they do it; scripts should be written in a language that is natural for the task at hand, not some general-purpose markup language.
  • It is target based: in their desire to pretend that it is possible to describe everything you want to do declaratively, most deployment tools lose sight of the fact that some operations require messy reality to be involved: servers may need restarting in order to notice a configuration change, for example. Ziniki Deployer assumes that *you* know what processes you will want in place and allows you to create targets that affect just part of your infrastructure: starting and stopping instances or services in accordance with your needs.
  • It is task oriented: Ziniki Deployer assumes that you have in mind "something that you want to do" and will just want to issue a command to do that. It does not require you to cobble together a number of operations in some broader script: all the operations should be able to be placed inside a Ziniki Deployer script.
  • It operates idempotently: CloudFormation attempts to pretend that it makes changes to infrastructure "atomically" and, if something goes wrong, starts to roll back the changes that it has made. While this is a virtuous goal, the fact that AWS architecture is not, in fact, atomic means that you often get stuck in a state where some of the changes have been applied and some rolled back. Ziniki Deployer does its best to ensure that the world is how it thinks it is before it starts operating (it gets its information about the state of the world directly from the source objects, not an internal "stack"), but if it does fail for any reason, it leaves the job "half done" and will then pick up where it left off after the (necessary) human intervention has resolved the problem.
  • It is modular: how modular? So modular that with no modules installed, Ziniki Deployer is not, in fact, a task-based deployment tool, but simply a parser. Everything from the idea of targets to its understanding of AWS primitives comes from one or more modules. Although this means that natively it has no understanding of your environment, it also means that it is just as capable of supporting Azure as it is of supporting AWS. It also means that if you don't like what we have provided, you can easily add your own.
  • It expects you to use composition: there are idempotent primitives for many cloud infrastructure elements. You could define all your configurations using these basic building blocks, but as humans, we don't think that way. A cloudfront distribution requires five objects to be built and linked together just so: the cloudfront.distribution.fromS3 composite does that for you. Likewise, the lambda and api.gateway.v2 composites take the parameters you provide and build and link all the primitives together.

A Worked Example

Let's look at how those work through the lens of a simple example, the script that deploys this website (yes, we eat our own dog food, here).
The first thing to note about deployer scripts is that they are designed to be "semi-literate" in the tradition of Miranda, Haskell and FLAS. We expect you to write more lines of description about your script than you do actual commands.
Anything that starts in column zero is assumed to be commentary and ignored by the script (although hopefully not by developers). Blank lines (including lines with some white space) are also ignored. Use them freely.
For all code lines (i.e. all other lines), indentation is significant. You may use any combination of leading spaces and tabs that appeals to you, but there is no conversion between the two and you must be consistent. Hopefully, if you are inconsistent, you will receive a very clear error about what you have done wrong, but always bear in mind that there is no 100% reliable way of converting between tabs and spaces, so we simply don't try.
Each level of indentation constitutes a sub-element of the parent (less indented) element. It is up to each element as it is created to define what sub-elements it will allow.
This is the deployer dogfood script.
It is responsible for keeping the deployer.ziniki.org website up to date
A target wraps up a sequence of operations or assertions. This is one of a very few "top level" elements, i.e. elements that can appear at the minimum level of indentation in the file. It contains actions, that is, each element appearing one level indented from a target must be a deployment action.
The deployer_ziniki_org task is responsible for getting all the infrastructure
in place and up and running.
    target deployer_ziniki_org
One of the key attributes of deployer scripts is that they are idempotent. Much of the infrastructure is built up using the concept of coins, which are elements of infrastructure which can be created and destroyed by using ensure. This says that if the item already exists, leave it as it is (or update it if the script indicates changes); if the item does not exist, create it.
In deployer scripts, commands that generate values can store those values in variables. They do this by appending => variable to the end of the command line. Not all commands generate a value; in that case, it is an error to attempt to assign it. Commands that do not have side-effects but do generate a value require that value to be assigned to a variable and failure to do so is an error.
All of the content is placed in an S3 bucket, "deployer.ziniki.org".
So the first step is to create that.
        ensure aws.S3.Bucket "deployer.ziniki.org" => deployer_bucket
            @teardown delete
Environment variables are a good way of handling externalities in scripts. They can be easily set on the command line; they can be configured inside tools; and the deployer allows them to be specified in files given to the deployment using the -e argument.
Find the content on the disk.  This is going to be in different places on
different machines, so start off by using an environment variable to identify
where the deployer website directory is.
        env "DEPLOYER_WEBSITE_DIR" => root
files.dir is a command that is used to navigate to a sub-directory of a directory.
Inside that is an "IMAGE" directory.  This script assumes that all of the website
content has been processed and placed in that directory and can then just be
mirrored into the bucket to display the website.
        files.dir root "IMAGE" => src_dir
files.copy copies all the files from the source to the destination. Both source and destination can be anything that knows how to copy (or pour) file contents from one place to another.
As with everything in the deployer, the idea is that this command should be idempotent, which means that, at the end of the operation, the contents of the destination should exactly match the contents of the source, with the minimum number of transfers performed.
Sadly, this is not true at the moment. It is just a copy operation.
Mirror the contents of the source directory into the bucket.  In theory, this
should ensure that the contents are exactly the same with the minimal possible effort.
        files.copy src_dir deployer_bucket
The next step in the script is to create a certificate. In order to know how to create a certificate, the ensure action needs to be given some properties. The properties have a standard form which is to give the name of the property followed by a left arrow (<-) followed by an expression of the appropriate type.
In this example, only a few simple expressions are used. There is complete documentation on the expression parser elsewhere, or at least there will be.
When creating this certificate, we depend not only on the AWS module but the dreamhost module. This must be specified on the command line and offers a DNS asserter using the Dreamhost API. This enables you to automatically issue certificates on AWS even if your registrar is elsewhere.
There is currently a bug with Dreamhost specifically where the API does not accept the CNAME records generated by AWS for validation. I have reported this, but have no current date on when it will be fixed.
In order to use cloudfront with a custom domain, we need to have a certificate.  AWS
Certificate Manager can issue one of those for us, providing we can "prove" we own the domain.
Since I do in fact own the domain, I can do this and specify that I will prove this
using the "DNS" method using the "dreamhost" provider.
        ensure aws.CertificateManager.Certificate "deployer.ziniki.org" => cert
            @teardown delete
            ValidationMethod <- "DNS"
            ValidationProvider <- "dreamhost"
cloudfront.distribution is an example of a composite pattern. In order to set up a cloudfront distribution, you need to create a network of interacting infrastructure objects, on top of things like the bucket and certificate that are truly external to the configuration. It is possible to configure all those elements separately using ensure and coins (and there is an example of that), but it is hard work and requires you to remember all the objects you need to create, which order to create them in, and link them all together. The composite used here makes everything much simpler.
Then we can set up a cloudfront distribution for the website.  This is a complex
beast and it has a number of moving parts.  In setting this up, we can reference the
things we have created above (such as the bucket and the certificate).
        cloudfront.distribution.fromS3 "for-deployer" => cloudfront
            @teardown delete
            DefaultRoot <- "deployer_website.html"
            Bucket <- deployer_bucket
            Comment <- "a distribution for deployer.ziniki.org"
            Certificate <- cert->arn
            Domain <- []
                "deployer.ziniki.org"
            MinTTL <- 300
            TargetOriginId <- "s3-bucket-target"
Lists and maps can be complicated things to represent neatly in scripts; deployer offers three options to make it as simple as possible. If you have a singleton list, you can just write the element with no special syntax. If you have a short and concise list, or a simple map, you can write it all on one line within appropriate brackets or braces and with the elements separated by commas. Or, if you have a more complex structure, you can assign the "empty" value to the property and then use an indented scope to insert the values.
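For example, the Domain property above uses the indented form; as a sketch (the second domain name here is purely illustrative, not part of the real script), the singleton could equally be written with no special syntax, and a short list could be written on one line:
            Domain <- "deployer.ziniki.org"
            Domain <- ["deployer.ziniki.org", "www.ziniki.org"]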
The CacheBehaviors are instructions on headers to return based on the filenames.
For us, we want to return "text/html" for ".html" files and "text/css" for ".css" files.
            CacheBehaviors <- []
                {}
                    SubName <- "html"
                    PathPattern <- "*.html"
                    ResponseHeaders <- {}
                        Header <- "Content-Type"
                        Value <- "text/html"
                {}
                    SubName <- "css"
                    PathPattern <- "*.css"
                    ResponseHeaders <- {}
                        Header <- "Content-Type"
                        Value <- "text/css"
                {}
                    SubName <- "js"
                    PathPattern <- "*.js"
                    ResponseHeaders <- {}
                        Header <- "Content-Type"
                        Value <- "text/javascript"
We can create a CNAME record with the DNS registrar if it is not there (or update it if it is). This is required to make sure that we have the custom domain name we want (in this case deployer.ziniki.org) point to the domainName provided by cloudfront.
Finally, we can set up the custom domain name.  Again, because the registrar is
Dreamhost, this is done there (the equivalent on AWS would be an aws.Route53.CNAME).
        ensure dreamhost.CNAME "deployer.ziniki.org"
            @teardown delete
            PointsTo <- (cloudfront->domainName)
Each file can have multiple targets in it; one or more targets can be specified on the deployer command line. The first target was all about creating the cloudfront distribution. This one is about updating it.
In general, the "end point" of both paths should be the same, and it may often be desirable to just have the one path and always use the same command; on the other hand it may be clearer (and quicker) to have a custom path for updating content.
It is obviously also important to consider permissions: fewer permissions may be required in order to update the website content than to create all the component parts.
When it's time to update the content, we simply upload the new content and
invalidate the cloudfront cache.
    target upload_deployer_content
The ensure verb looks for an infrastructure item and makes sure that it is there. The find verb is responsible for locating an item if it exists, but will not create it if it does not.
The first step here is to find the existing bucket and cloudfront distribution
using their unique names.
        find aws.S3.Bucket "deployer.ziniki.org" => deployer_bucket
        find aws.CloudFront.Distribution "for-deployer" => cloudfront
The process of finding and copying the files is exactly the same as before.
As noted above, this currently copies all the files rather than updating the ones that have changed. This is a bug to be fixed later.
We are going to copy the contents from the same directory as before; the directory
is provided in the DEPLOYER_WEBSITE_DIR environment variable.
        env "DEPLOYER_WEBSITE_DIR" => root
Find the IMAGE dir and copy files as before.
        files.dir root "IMAGE" => src_dir
        files.copy src_dir deployer_bucket
cloudfront.invalidate is different to most of the operations here, in that it is not idempotent and does not attempt to check if it has already been performed. It always operates by invalidating the current cloudfront cache, even if no changes have been made.
Two additional features are planned for deployer that would allow you to address this. Firstly, if and when the files.copy operation is an update operation, it will be possible to tell if it has, in fact, updated any files. Secondly, there will be a case verb that allows conditional execution. Combining these two will enable scripts to only invalidate the cache if files have changed.
The cloudfront.invalidate task takes care of invalidating the distribution with a
given identifier.
        cloudfront.invalidate cloudfront->distributionId

Getting Started

In order to get started, the first step is to download the latest deployer binaries. Then create a directory to store your scripts. Using the sample above, the other samples and the various tasks from the modules, construct an appropriate script for your use case.
Then run the deployer like so:
$DEPLOYER_HOME/deployer -m coremod.so -m awsmod.so target ...
Disclaimer
This software is a pre-beta release. It is neither complete nor fully functional.
The software is presented as-is and open source. You are free to examine, debug, modify and submit changes ("pull requests"). See the development section for more details.
This documentation is provided as an aid to users and developers, but does not make any representations about the past, current or future functionality of the software.
Both software and documentation are provided without any warranty, including the implied warranty of merchantability or fitness for a particular purpose.
Deployer sample
This is the deployer script used to stand up and refresh deployer.ziniki.org.
This is the deployer dogfood script.
It is responsible for keeping the deployer.ziniki.org website up to date
The deployer_ziniki_org task is responsible for getting all the infrastructure
in place and up and running.
    target deployer_ziniki_org
All of the content is placed in an S3 bucket, "deployer.ziniki.org".
So the first step is to create that.
        ensure aws.S3.Bucket "deployer.ziniki.org" => deployer_bucket
            @teardown delete
Find the content on the disk.  This is going to be in different places on
different machines, so start off by using an environment variable to identify
where the deployer website directory is.
        env "DEPLOYER_WEBSITE_DIR" => root
Inside that is an "IMAGE" directory.  This script assumes that all of the website
content has been processed and placed in that directory and can then just be
mirrored into the bucket to display the website.
        files.dir root "IMAGE" => src_dir
Mirror the contents of the source directory into the bucket.  In theory, this
should ensure that the contents are exactly the same with the minimal possible effort.
        files.copy src_dir deployer_bucket
In order to use cloudfront with a custom domain, we need to have a certificate.  AWS
Certificate Manager can issue one of those for us, providing we can "prove" we own the domain.
Since I do in fact own the domain, I can do this and specify that I will prove this
using the "DNS" method using the "dreamhost" provider.
        ensure aws.CertificateManager.Certificate "deployer.ziniki.org" => cert
            @teardown delete
            ValidationMethod <- "DNS"
            ValidationProvider <- "dreamhost"
Then we can set up a cloudfront distribution for the website.  This is a complex
beast and it has a number of moving parts.  In setting this up, we can reference the
things we have created above (such as the bucket and the certificate).
        cloudfront.distribution.fromS3 "for-deployer" => cloudfront
            @teardown delete
            DefaultRoot <- "deployer_website.html"
            Bucket <- deployer_bucket
            Comment <- "a distribution for deployer.ziniki.org"
            Certificate <- cert->arn
            Domain <- []
                "deployer.ziniki.org"
            MinTTL <- 300
            TargetOriginId <- "s3-bucket-target"
The CacheBehaviors are instructions on headers to return based on the filenames.
For us, we want to return "text/html" for ".html" files and "text/css" for ".css" files.
            CacheBehaviors <- []
                {}
                    SubName <- "html"
                    PathPattern <- "*.html"
                    ResponseHeaders <- {}
                        Header <- "Content-Type"
                        Value <- "text/html"
                {}
                    SubName <- "css"
                    PathPattern <- "*.css"
                    ResponseHeaders <- {}
                        Header <- "Content-Type"
                        Value <- "text/css"
                {}
                    SubName <- "js"
                    PathPattern <- "*.js"
                    ResponseHeaders <- {}
                        Header <- "Content-Type"
                        Value <- "text/javascript"
Finally, we can set up the custom domain name.  Again, because the registrar is
Dreamhost, this is done there (the equivalent on AWS would be an aws.Route53.CNAME).
        ensure dreamhost.CNAME "deployer.ziniki.org"
            @teardown delete
            PointsTo <- (cloudfront->domainName)
When it's time to update the content, we simply upload the new content and
invalidate the cloudfront cache.
    target upload_deployer_content
The first step here is to find the existing bucket and cloudfront distribution
using their unique names.
        find aws.S3.Bucket "deployer.ziniki.org" => deployer_bucket
        find aws.CloudFront.Distribution "for-deployer" => cloudfront
We are going to copy the contents from the same directory as before; the directory
is provided in the DEPLOYER_WEBSITE_DIR environment variable.
        env "DEPLOYER_WEBSITE_DIR" => root
Find the IMAGE dir and copy files as before.
        files.dir root "IMAGE" => src_dir
        files.copy src_dir deployer_bucket
The cloudfront.invalidate task takes care of invalidating the distribution with a
given identifier.
        cloudfront.invalidate cloudfront->distributionId
Dynamo sample
This is an example of creating Dynamo tables.
This is the target to put up (and pull down) all the infrastructure.
    target create_dynamo_table
Dynamo databases are serverless.  In order to use Dynamo, all you need to do is create
a _Table_.
Dynamo Tables are modelled in deployer as Coins, using the identifier aws.DynamoDB.Table.
As with all idempotent infrastructure, the table must have a unique name, where unique
is "within the range of dynamo tables".  If this name already exists, the properties here
will be used to update it; if it does not exist, the table will be created with them.
        ensure aws.DynamoDB.Table "Stocks"
The @teardown adverb tells ensure what to do when tearing down this Coin.  In this case,
we specify _delete_, which means throw away all the contents and delete the table.  The
obvious alternative is _preserve_, which does not delete the table.  This enables a script
to ensure that something it depends on does already exist, but assumes that in the usual
course of events someone else will have created it and continues to "own" it (as Rust
would say).
            @teardown delete
Specifying DynamoDB tables can be complicated because you need to specify both AttributeDefinitions
and KeySchema, but end up giving the same fields for both (or receive an error).  Deployer
avoids this by allowing you to specify any number of fields of any types, but only using
those which have a nested @Key adverb, the values of which can be _hash_ or _range_.
            Fields <= aws.DynamoFields
                Symbol string
                    @Key hash
                Price number
Neptune sample
This is an example of how to create a Neptune cluster and instance, and then destroy the instance without tearing down the whole cluster.
Provisioning a Neptune cluster requires two separate things to be created: a Cluster and an
Instance.  It is possible to leave the cluster in place (with your data but minimal cost) but
delete the engine (which costs more and is only needed when the cluster is active).
The create_neptune_cluster target creates both from scratch if neither exists, but if the cluster already
exists, it will find the cluster already present in the cloud, create a new primary instance
and connect the two together.  If both already exist, it does nothing.
    target create_neptune_cluster
Neptune has to run "inside" a VPC.  In order to do this, it requires you to create a special
"SubnetGroup" (specific to Neptune; this is not a VPC thing).  Theoretically, it is possible
to create this within this script, but that coin does not yet exist.  It is, however, possible
to find one that already exists, and store it in the variable _subnet_.
        find aws.Neptune.SubnetGroup "neptunetest" => subnet
Creating the cluster requires a unique name; we can store the resulting cluster (regardless
of whether we found an existing one or created one) in the variable _cluster_.
        ensure aws.Neptune.Cluster "user-stocks" => cluster
            @teardown delete
Associate this with the SubnetGroup from the _subnet_ we found above.  Note that the property
implies we want a _Name_ but it is happy to accept a whole _SubnetGroup_.  This is a common
pattern in deployer: where it is clear that a specific type of object is acceptable rather
than its name or arn, it will be accepted and the appropriate value extracted.  If deployer
cannot "understand" what you have passed it, you will receive an appropriate type error.
            SubnetGroupName <- subnet
To use the cluster with instances of the _db.serverless_ type, it is necessary to specify
a capacity range.
            MinCapacity <- 1.0
            MaxCapacity <- 1.0
We need to create a primary Neptune instance in order to actually do any work with the database.
        ensure aws.Neptune.Instance "primary"
            @teardown delete
We need to associate this with the _cluster_ we just created above.
            Cluster <- cluster
Specify what type of AWS instance should be used to run the database.  Here we specify
"serverless", indicating we don't want to provision a whole server.  Note that all AWS
instance types here need to be prefixed with "db.", although that is poorly explained in
the manual.  Such values are accepted by deployer, but any values without a "db." prefix
have it automatically added.
We have used an explicit string here, but any string expression would be acceptable.
            InstanceClass <- "serverless"
DB Instances cost money even when idle, so it is reasonable as a developer to close
them down when not being used.  The cluster and the data will remain in place, and
a new primary will be started up when the target above is re-run.
    target drop_primary
        find aws.Neptune.Cluster "user-stocks" => cluster
        ensure aws.Neptune.Instance "primary"
            @teardown delete
The @destroy adverb in a target identifies a specific piece of infrastructure to be
destroyed when the whole script is run with the --destroy flag.
            @destroy
Sample Environment
This is a simple example of using a file to set environment variables. This has the advantage of allowing each user (and continuous build tools) to create an environment which uses the same script but customized to their own credentials and directory structure.
# The AWS profile to use to configure things
AWS_PROFILE=ziniki-admin
Command Line Syntax
The following options are supported by the deployer:
-d <dir> - specify a directory to read target files.
-e <envs> - read a file from one of the specified directories containing environment variable definitions.
-m <module> or --module <module> - include the specified module file as a module in the deployer.
--teardown - reverse the process and tear down everything that is not protected by a suitable @teardown adverb in the chosen targets.
--destroy - allow objects tagged with @destroy to be destroyed.
target - the name of a target in a target file.
Note that all of the files in all of the specified directories will always be read and parsed whenever the deployer is run. The intention here is to ensure that there is a limit to the amount of "drift" that can take place. But only those targets which are explicitly named on the command line will actually be executed. One consequence of this is that unexpected errors may occur from files that you were not expecting to be read.
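As an illustration (the directory name and environment file name here are made up, not part of any real project), a full invocation to run the deployer_ziniki_org target might look like:
$DEPLOYER_HOME/deployer -d scripts -e website.envs -m coremod.so -m awsmod.so deployer_ziniki_org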
Blocking
The first step in analyzing a deployer file is to divide it into lines and blocks.
Every statement must be given on a single line. There are no continuation lines of any sort. However, each statement can have an arbitrary number of inner "child" lines that change the meaning of the "parent" statement.
The parent-child relationship is entirely determined by white space indentation. No other syntax (such as curly braces) is or can be used to indicate nesting. Indentation is strictly within a file and it is not possible to continue definitions inside another file.
Lines that consist entirely of white space characters have no meaning. These lines are ignored by the parser.
All lines that do not have any leading white space characters are considered to be commentary lines. These lines are ignored by the parser.
The first non-blank line with leading white space is considered to be the "first" line of the file. All subsequent lines must begin with at least that same exact white space prefix.
The only significant leading white space characters are the standard space (ASCII 32) and tab (ASCII 9) characters. No other white space characters (such as invisible white space or non-breaking space) are allowed and using them will result in errors.
Any mixture of spaces and tabs may be used, but in any given scope, exactly the same mixture of spaces and tabs must be used at the beginning of the line. A tab is a tab and a space is a space. There is no conversion between the two.
Lines subsequent to the first must have either:
  • Exactly the same white space prefix as one of the prior lines still in scope;
  • The same white space prefix as the previous non-blank, non-commentary line together with further white space characters.
The current scope consists of the most recent line at each level of lesser indent than the current line. All other lines are considered out of scope.
Because white space characters are by definition impossible to see and hard to talk about, they are translated internally into "S" for space and "T" for tab. Any errors about invalid indentation will say things like "SSSS" is not valid in scope with "T": most likely you have mixed a four-space tab with a line beginning with four spaces. On the screen they look the same, but they are not. (On their own, either would be valid; it is the mixture which is not. The deployer takes the first one as being definitive; it does not attempt to understand what you might have meant.)

An Example

Consider the following sample input (not a valid deployer file, but for illustrative purposes):
    this is the first line, with four spaces at the front
      the second line begins with six spaces
        the third line begins with six spaces followed by a tab
      the fourth line begins with six spaces
        the fifth line begins with eight spaces
    the sixth line jumps back to four spaces
From the point of view of the blocker, the text of the line can be ignored, and the white space represented as follows:
SSSS
SSSSSS
SSSSSST
SSSSSS
SSSSSSSS
SSSS
Each of these six lines is valid because:
  1. the first line can have any combination of spaces and tabs
  2. this has SSSS as a prefix, so is a child of (1)
  3. this has SSSSSS as a prefix, so is a child of (2)
  4. this has indentation of exactly SSSSSS, so is also a child of (1) and a sibling of (2)
  5. this has SSSSSS as a prefix, so is a child of (4)
  6. this has indentation of exactly SSSS, so is a second top level element and a sibling of (1)
On the other hand, the following would not be valid:
  • The second line could not begin T, because the first line has set the context that all lines in the file must begin SSSS.
  • The third line could not begin SSSSS, because it must either be an extension of the full text of the previous line (SSSSSS), or it must be exactly identical to one of the previous lines in scope (SSSS or SSSSSS).

Tokenization and Interpretation

The indentation is used to group individual lines into blocks of lines. Each nested block of lines constitutes a scope. Each line is then passed to the tokenizer to be translated into tokens. The meaning of the line is then determined by the interpreter in force in that scope.
As this process takes place, each tokenized and interpreted line is "attached to" its outer scope (the top level scope for lines at the top level of indentation). The result of this is to build a parse tree (technically an orchard), where each top level definition represents a potential root of a tree.
Tokenization
Once the file has been broken into scopes by indentation, each line can be considered separately and tokenized. The tokenization process is entirely context-free and standardized. The modules cannot affect tokenization in any way.
There are six types of tokens:
  • punctuation;
  • symbols;
  • numbers;
  • identifiers;
  • adverbs;
  • and strings.
Punctuation tokens always appear as individual characters and are exactly the following:
  • ( - open parenthesis;
  • ) - close parenthesis;
  • [ - open (square) bracket;
  • ] - close (square) bracket;
  • { - open curly brace;
  • } - close curly brace;
  • : - colon;
  • , - comma.
All of these characters are used in expressions to group and build lists and maps.
Symbol tokens consist of groups of one or more of the following symbol characters appearing consecutively without intervening whitespace or other tokens:
  • + - plus;
  • - - minus;
  • * - star;
  • / - slash;
  • < - less than;
  • = - equals;
  • > - greater than;
  • & - ampersand;
  • | - vertical line;
  • ! - exclamation point;
  • $ - dollar;
  • % - percent.
Individually or in groups, symbols made from these characters can be assigned function definitions and can then be used in expressions.
Numeric tokens consist of valid IEEE 754 floating point numbers or valid hexadecimal numbers with the prefix 0x. Examples include:
  • 0
  • 26
  • -13
  • 102.73
  • 1e3
  • 7.3e-4
  • 0x0
  • 0x3f
  • 0x8000
Numeric tokens always resolve to a floating point number. When integers are required, deployer will ensure that an appropriate integer is used.
Identifier tokens consist of a sequence of characters including letters, digits and the characters . (dot) and _ (underscore). Identifiers cannot start with a valid number, e.g. 24hours will be broken into two tokens 24 and hours. Examples include:
  • a
  • hello
  • pi
  • aws.lambda
  • the_gateway
  • ec2.linux
  • s3.my_bucket.has_name
Identifier tokens are used as names and must be attached to verbs, functions or variables.
Adverb tokens consist of an @ (at) symbol followed by one or more letter characters. Adverbs must either continue to the end of the line, or must be terminated with a white space character. They should be written in camel case. Examples include:
  • @teardown
  • @destroy
  • @mustExist
  • @mayCreate
  • @cleanUp
Adverb tokens are used in adverb clauses (usually with one argument) to modify the meaning or intent of a verb.
String tokens consist of an arbitrary number of characters surrounded by single (') or double (") quotation marks. In order to contain the quotation mark within the string, it should be included twice in succession. Examples include:
  • "hello, world"
  • 'my name is Alex'
  • "he said ""hello"" to me."
  • 'Alex''s pen'
To write two strings next to each other, either the alternative quote marks must be used, or a space must be left between them.
The use of any other characters will result in a syntax error.
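As an illustration of these token types, consider this line from the deployer.ziniki.org sample:
        ensure aws.CertificateManager.Certificate "deployer.ziniki.org" => cert
This is tokenized as the identifier ensure, the identifier aws.CertificateManager.Certificate, the string "deployer.ziniki.org", the symbol => (the = and > characters appear consecutively, so they are consumed greedily as a single symbol token) and the identifier cert.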

Using White Space

There is some ambiguity in the interpretation of sequences of characters. Where such ambiguity exists, the tokenizer will favour greedy consumption of characters (i.e. the first token will take them). White space can always be used to separate tokens. For example:
  • 7 e3 not 7e3
  • 'Alex' 'says hello' not 'Alex''says hello'
  • f x not fx
Interpretation
Once the file has been broken into scopes by indentation, and each line has been tokenized, the tokens in a line can be interpreted. This process takes place using an interpreter. The interpreter in force in any given scope depends on the interpretation of the line defining the scope. The top level interpreter is "hardcoded" by the top level module.
In general, the core module will be in use and that defines the top level interpreter as a verb interpreter, which processes the verbs associated with the top-level extension point.
At the time of writing, the only top level verb is target, which begins the definition of a deployment target.
Modules can, and should, introduce interpreters for specific situations. However, the driver supports three built-in forms of interpretation.

The Verb Interpreter

The verb interpreter takes the set of input tokens from a line and requires that the first one is an identifier (anything else is an error). That identifier is used to find a suitable handler from a map associated with an extension point with which the interpreter is configured. That handler is then expected to process the tokens and return an interpreter for the nested content.
The verb interpreter allows any definitions on a line to be assigned to variables. This is done by putting the assign operator (=>) followed by an identifier which is the name of the variable to be assigned at the end of the line.

The Properties Interpreter

The notion of processing properties is very common in deployer contexts, and as such the properties interpreter is responsible for handling these.
Each line is either of the form @adverb expr... or name <- expr. In the first case, the expressions are parsed and the adverb is attached to the parent context; in the second case, the expression is parsed and the name, expr pair is passed to the parent context.
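For example, these two nested lines from the certificate definition in the deployer.ziniki.org sample are handled by the properties interpreter:
            @teardown delete
            ValidationMethod <- "DNS"
The first is an adverb clause attached to the parent ensure context; the second parses the string expression and passes the pair ValidationMethod, "DNS" to the parent context.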

The Expression-only Interpreter

In a number of cases (specifically when processing nested lists), it is possible for each line to represent a single value. The expression-only interpreter handles this situation by treating each line as a single expression and attaching its parsed form to the parent collector.
Resolution
After all the input files have been parsed, all the identifiers and symbols referenced in the files are resolved in a step known as resolution.
Most of the identifiers will refer to elements that are defined in code, for example + refers to the addition operator, or hours refers to the built-in function to turn a number into a duration in hours.
It is also possible to define names in files. Functions defined in modules may do this for any definitions they offer; by default, the only way to do this is to introduce variables using the verb interpreter.
Either way, the definitions are stored in an internal dictionary which understands the concept of block-based scope. After parsing is complete, all the statements in the file are recursively traversed and all identifiers are examined and associated with the appropriate definition. If no definition can be found, an error is generated.
Expressions
Expression parsing in the deployer is significantly different from most programming languages, and very different to the traumatic experience of trying to describe calculated values in Terraform or CloudFormation. The intent has always been to try and make it possible to describe "values" in the most natural way possible.
It is context dependent where expressions start and finish, and whether only a "single" token is to be used for an expression, or the rest of the line. (A "single" token describes something like a number or a string, but also covers any whole expression beginning with a parenthesis, bracket or brace.)
If you are in any doubt as to how an expression would be parsed, you can resolve that by including the sub-expression you want to be evaluated first in parentheses. At the same time, lists (enclosed in brackets) and maps (enclosed in curly braces) will take precedence over any other operators.
Within each sub-expression, parsing proceeds from left to right. Each symbol is identified as either a function, an operator, a variable, a constant or a sub-expression. Functions and operators are essentially the same: the only difference is in the syntax. A function is an identifier which has been bound to a function, rather than a variable. An operator is a set of symbol characters which have been bound to a function. Each function is characterized as prefix (coming before all of its arguments), postfix (coming after all of its arguments) or infix (accepting arguments before or after the operator symbol). The parser will allow a function to have any number of arguments in the permitted places, although the function definition itself may raise an error if it does not see the number of arguments it desires.
Consider the function sum as an example. The sum function is a prefix function: that is, the token sum comes before all the arguments. If you provide no arguments, it will be happy, and will always evaluate to zero. If you give one argument, it will return the value of that argument. If you give multiple arguments it will add them together.
Likewise, if we consider the hours function, it is a postfix function. The parser will accept multiple arguments followed by hours, but the hours function will raise an error when consulted, because it only accepts one prefix argument. Zero, or two or more prefix arguments are not acceptable.
Every function and operator has a precedence level, and when a single sub-expression contains more than one operator, the parsing algorithm first compares the precedence of the first two operators. If the first operator has higher precedence, it is presented with the whole sub-expression leading up to the second operator and expected to resolve this to a single expression tree; this is used to replace the initial set of tokens and the algorithm is run again on the remaining tokens. If the second operator has higher precedence, then the initial tokens (up to the first operator) are parked and the algorithm run again; ultimately a single expression tree should result and this is appended to the parked tokens and the final result calculated.
If the two operators have the same precedence, the associativity of the first operator is used. If this is left associative, it is treated as higher precedence; if it is right associative the second operator is treated as higher precedence.
Note that method invocation is handled as an infix operator. The operator -> is an infix operator which takes one prefix argument (the object to operate on), one postfix argument (the name of the method) and optionally many more postfix arguments (which will be passed to the method if present).
Deployer has many more postfix operators (and particularly functions with names) than is typical of most programming languages (which often require all named functions to be prefix functions). This is to make expressions such as 24 hours be as natural as possible.
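As a sketch, using the builtin definitions listed under Modules:
2 + 3 * 4 parses as 2 + (3 * 4), because * (precedence 6) binds more tightly than + (precedence 5).
10 - 4 - 3 parses as (10 - 4) - 3, because - is left associative.
cloudfront->domainName uses the infix -> operator to invoke the domainName method on the cloudfront variable, and 24 hours applies the postfix hours function to the number 24.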
Once again, if you are unsure of what the meaning of your expression will be, use parentheses to make it clear. Others will thank you.
Modules
Deployer has a very modular architecture. So much so, that if you strip away all the modules, it is not even a deployer - it knows nothing about targets, and can do no work. All of the functionality is contained in the modules. The core of the system (the driver) knows how to parse files and nothing else.
The modularity works by using a set of extension points. The main driver defines five of these (as shown below) and modules can define more as needed. The core module defines four more, which define the core of how deployment works. Other modules then provide implementations for these extension points.
In each of the module sections that follow, the module describes the extensions it provides, grouped by extension point name.

Extension Points

main-args - this extension point identifies the main entry point of the program, which will be responsible for processing arguments. Only one definition of this extension point is permitted.
top-level - this extension point allows modules to provide verb handlers for specific verb names at the top level of the file.
attacher - this extension point allows items to be attached to the top level of the parse tree. Most elements are attached to an "immediate parent", but top level elements do not have a parent; this code is used to handle that.
At the time of writing, this is not properly supported, since the only valid top level form is the target.
prop-interpreter - this extension point allows modules to attach additional handlers into the property interpreter flow.
function-defn - this extension point allows modules to provide function definitions. The base driver code already includes basic definitions, listed below.

Interpreters

Deployer uses a very strict indentation rule for interpreting files. Lines with text in the first column are treated as commentary and are ignored for all practical purposes. Otherwise, every line must have a whitespace indentation which is consistent with each of the previous lines with equal or less nesting. Users may choose any combination of tabs and spaces, but must make the same choice for each active level of nesting. Scopes can then be identified numerically by how many ancestors (active lines with less nesting) they have.
After handling the indentation, the driver parser breaks each line into a list of tokens, and handles any variable assignment that may be present. The remaining tokens then need to be handled in a way appropriate to the scope (level of indentation and indentation context).
Lines at the top level are always interpreted using the verb interpreter and the acceptable verbs are those attached to the top-level extension point; subsequently, each chosen handler is required to return an appropriate interpreter for its nested content.
verb - this interpreter looks at the first token it receives and requires that it be an identifier. It has a dictionary of acceptable verbs and selects the handler that matches. If no handler matches, it reports an error and returns the ignore interpreter.
properties - this interpreter looks for patterns of the form a <- b and calls the parent scope with a property definition. It also handles the nested case a <= handler where handler is the name of an interpreter defined in the scope which reads the nested scope in order to produce a property value.
The property interpreter also handles adverbs. Any adverb token will be passed, along with its argument, to an enclosing scope handler.
ignore - this interpreter simply ignores all content and returns itself. This is mainly useful to avoid cascading errors: when a handler encounters an error, it can report it directly and then return this interpreter to avoid having additional (usually spurious) errors be reported to the user.
disallow - this interpreter simply raises an error that content cannot appear here and then ignores all nested content.

Builtin Function Definitions

-> - method invocations are performed using the invoke operator ->. This is defined at the driver level. It is an infix operator with precedence 10 and left associative; it accepts one prior parameter (the object) and requires at least one post parameter (the method name). Any additional post parameters are passed to the method invocation as arguments.
+ - an infix operator with precedence 5 which is left associative. It takes one prior and one post parameter and adds them together.
- - an infix operator with precedence 5 which is left associative. It takes one prior and one post parameter and subtracts the second from the first.
* - an infix operator with precedence 6 which is left associative. It takes one prior and one post parameter and multiplies them together.
/ - an infix operator with precedence 6 which is left associative. It takes one prior and one post parameter and divides the first by the second.
sum - a prefix function with precedence 1 which is right associative. It takes zero or more numeric arguments and produces the sum of all of them.

Builtin Symbol Definitions

true - a constant defined to have the numeric value 1.
false - a constant defined to have the numeric value 0.
At the time of writing, these exist outside the normal extension point mechanism. There probably should be an extension point for constant-defn or some such, but it is not entirely clear how it should work.
Core Module
The core module is what turns the driver into a deployer. It contains the concept of a target, as well as the code to process arguments and dispatch targets. It also defines subsidiary extension points to allow definitions within the provided top level target definition.

Extension Points

target - this is an extension point which allows modules to define verbs which may appear within the scope of a target; in other words, actions that targets may perform.

blank - deployer uses the metaphor of minting coins to describe how infrastructure components are produced. A blank is any struct which can be found, minted or handled inline within the deployer, especially with the find and ensure verbs.

policy-statements - the core module defines a verb handler to define policies. In addition to the obvious statements allow and deny, different modules and systems may need additional verbs. These can be defined here.

policy-inner - the core module defines a verb handler to define allow and deny. Modules may add additional child content to these.

Top Level Verbs

target <name> - define a target with the given name. The inner scope is defined using a verb interpreter with all the verbs found on the target extension point.

Target Verbs

env <expr> - recover a variable from the user environment if it is available; it will be assigned to the (required) variable on the line. If the environment does not contain the named variable then an error is reported.

show <expr>... - output the values of the expressions to standard output.

files.dir <expr>... - the arguments are a sequence of path elements; the first must be an absolute path, while the others must be relative. The result is a path which is stored in the (required) variable.
This is written in the context of file source directories; there probably should be an adverb which allows the path to not exist in order to handle destination directories which will be created.

files.copy <src> <dest> - this copies (or updates if supported) all the files in <dest> to match those in <src>.
This is not fully implemented yet. It should have adverbs to control how it operates: for example, it should be possible to say "update only if changed", "always update", "clean any files that are not included". Each of these has its uses and the system cannot always decide for itself which is the best strategy. The safest, and thus the default, is to always copy all the files but not to delete anything.

find <coin> <name> - locate an existing piece of infrastructure. The coin identifies what sort of infrastructure you are looking for, and the name is a unique identifier that can be used to identify a specific instance of that coin. See the section on Naming for more information on names.
There are a number of cases in which find can be used. The most important case is to find items of infrastructure (such as domain names) which either cannot or should not generally be created automatically. Another common case is to find a piece of infrastructure (such as a VPC) which is managed by another deployment script or process. A third case is to find items of infrastructure created locally but which were not captured.
There are some rough edges here that need to be smoothed out in the fullness of time. Firstly, it is to be expected that find will fail immediately if the component cannot be found; but the third case prevents that. The third case exists for components that are created in composites; but it really should be possible to "capture" those in variables rather than doing find at all. Secondly, if we want find to work in different ways, there should be an adverb to control how it functions.

ensure <coin> <name> - locate or create a piece of infrastructure. The coin identifies what sort of infrastructure and the name is a unique identifier that can be used to identify a specific instance of that coin. See the section on Naming for more information on names.
ensure is the cornerstone of the deployer. Most of the work done in most scripts is to ensure that various coins exist. It is strictly idempotent. It first tries to identify if the named coin exists; only if it does not, will it be created.
If the coin does already exist, it should be updated to match the indicated configuration. Where this is not possible, an error should be raised.
Actually performing these updates is the responsibility of the individual coins; at the time of writing many of them are not good at it. Caveat Emptor! If you suspect that an update is not happening, don't doubt your sanity, doubt the code. Feel free to fix it or get someone else to do so.

coin <coin> [<name>] - some coins do not have a separate existence in the cloud space with an identifiable name, but can only exist as part of a bigger entity. These cannot be "found". Consequently the coin command is used to create these "in memory"; the name is likewise only in memory and does not correspond to anything in the cloud.
All memory coins MUST be bound to a variable so that they can be used in other entities. Failure to do so is an error.

policy - start the definition of an inline policy. The nested content indicates what the rules of the policy should be.
Policies defined at the top level MUST be bound to a variable so that they can be attached to other entities. Failure to do so is an error.

attachPolicy <to> <policy> - attach a previously defined policy to an infrastructure element.

Policy Statements

allow <action> <resource> - allow the specified action to be applied to the specified resource. The nested content may include additional constraints using the policy modifiers.

At least deny should also be supported.

Policy Modifiers

action <action> ... - include additional actions to be permitted with the specified resources.

There should be an equivalent resource command.

principal <type> <id> - specify a role as a principal with a given type and identifier. This is something provided by AWS and may be AWS-specific, in which case it should probably be moved to the AWS module.

condition <test> <left> <right> - some permissions are conditional - for example, based on the role name. This verb adds a condition element to the surrounding policy action.
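As a sketch of how these pieces fit together (the action string and variable names here are illustrative, not taken from a real script), a target might define an inline policy and attach it like this:
        policy => read_content
            allow "s3:GetObject" deployer_bucket
        attachPolicy deployer_bucket read_content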

Function Definitions

hours - a postfix operator which converts a number into a duration quantity equivalent to that number of hours.
AWS Module
The AWS module is designed to support the AWS cloud. It will probably only ever support a small portion of the cloud infrastructure, and other modules will be created to support other portions of the AWS cloud (which is now very big).
For the foreseeable future, this module will only implement those features which are needed to support existing projects. There is no plan to support even a consistent subset of AWS functionality.

Extension Points

dns-asserter - this is an extension point which allows modules to specify a means of asserting that they control a DNS name. It is needed to support the workflow to create a certificate.

Target Verbs

cloudfront.invalidate <expr> - invalidate an existing cloudfront distribution by id. The id will need to be recovered by some means, e.g. looking up a cloudfront distribution by name.

lambda.addPermissions <name> - add permissions to a lambda. This is a specific lambda operation and is not about adding permissions to its role. It is specifically used to add permissions allowing the API Gateway to invoke the lambda. The provided name is a name associated with the permission and is not a reference to a pre-existing object. The inner scope of this declaration consists of policy statements, typically allow.

lambda.publishVersion - publish the "current" version of the lambda. That is, publish a new version of the lambda, referencing the current state of code and configuration. It takes two nested arguments: Name (the lambda function name or ARN) and optionally an Alias, in which case the named alias will be updated with the provided version.
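As a sketch (the function name and alias are illustrative, and expressing the nested arguments with property syntax is an assumption):
        lambda.publishVersion
            Name <- "my-function"
            Alias <- "live"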

Composites

cloudfront.distribution.fromS3 <name> - create a cloudfront distribution along with all the necessary components to make it work. A rough sketch of how the properties below map onto the underlying distribution configuration follows the property list.
Bucket - the bucket or bucket ARN from which to retrieve the files to distribute.
Certificate - the ARN of a certificate in the ACM certificate manager to be associated with the distribution.
Comment - a string value to attach to the Comment field on the distribution object.
Domain - a list of string values which represent the custom domains to be associated with the distribution.
MinTTL - a time value which determines the minimum lifetime of content served by the distribution.
CacheBehaviors - a list of objects to describe the response values for various types of content by path.
TargetOriginId - a unique string that is used to tie together various parts of the cache policy and behavior.
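To make the property names above concrete, the fragment below sketches (in Python, for illustration only) how they line up with fields in a CloudFront distribution configuration. It is deliberately incomplete and is not valid input for any API call; the bucket name, ARNs and domains are invented.

# Illustrative mapping only; not a complete or valid DistributionConfig.
distribution_config_fragment = {
    "Comment": "Static site for example.org",                    # Comment
    "Aliases": {"Quantity": 1, "Items": ["www.example.org"]},    # Domain
    "ViewerCertificate": {                                       # Certificate
        "ACMCertificateArn": "arn:aws:acm:us-east-1:123456789012:certificate/EXAMPLE",
        "SSLSupportMethod": "sni-only",
    },
    "Origins": {                                                 # Bucket
        "Quantity": 1,
        "Items": [{
            "Id": "my-site-origin",                              # TargetOriginId
            "DomainName": "my-site-bucket.s3.amazonaws.com",
        }],
    },
    "DefaultCacheBehavior": {
        "TargetOriginId": "my-site-origin",                      # TargetOriginId again
        "MinTTL": 24 * 3600,                                     # MinTTL (e.g. 24 hours)
    },
    # CacheBehaviors would add further per-path entries alongside the default.
}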

lambda.function <name> - create a lambda function, together with a version and alias. A rough SDK-level sketch of the equivalent AWS calls follows the property list.
Runtime - a string describing the desired runtime. Any of the indicated runtimes in the AWS manual are accepted. "go" is also accepted as a synonym for "provided.al2023".
Code - a location indicating where the code is going to come from. Typically, this will use the aws.S3.Location interpreter to define it using a Bucket and Key.
Role - every lambda needs a role to operate as, giving it permissions to execute. This can be simply the ARN of a pre-existing role, or it can be an inline definition of the role using the aws.IAM.WithRole interpreter.
PublishVersion - a boolean indicating if the current configuration should be published as a new version.
Alias - a string value indicating that the current version should be published as an alias with this name.
VpcConfig - a configuration, usually defined with the aws.VPC.Config interpreter, that specifies the VPC configuration to place the lambda in. If not present, the lambda is not placed in a VPC.
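Roughly speaking, this composite bundles up the AWS calls sketched below. The sketch is illustrative boto3 code, not the deployer's implementation; the function name, bucket, key, role ARN and VPC ids are invented for the example.

import boto3

lam = boto3.client("lambda")

# Create the function: Runtime, Code (aws.S3.Location), Role, VpcConfig
lam.create_function(
    FunctionName="my-function",                                   # made-up name
    Runtime="provided.al2023",                                    # "go" is a synonym for this
    Role="arn:aws:iam::123456789012:role/my-lambda-role",         # Role, given as an ARN
    Handler="bootstrap",
    Code={"S3Bucket": "my-code-bucket", "S3Key": "lambda/my-function.zip"},
    VpcConfig={"SubnetIds": ["subnet-0abc"], "SecurityGroupIds": ["sg-0abc"]},
)

# PublishVersion / Alias: publish the current configuration and give it a name
version = lam.publish_version(FunctionName="my-function")["Version"]
lam.create_alias(FunctionName="my-function", Name="live", FunctionVersion=version)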

api.gatewayV2 <name> - create an APIGatewayV2 with all the necessary components. A rough SDK-level sketch of the sequence of AWS calls involved follows the nested commands below.
Protocol - select the type of gateway you wish to create: "http" or "websocket".
IpAddressType - select the IP address protocols to be supported by the gateway: "ipv4", "ipv6" or "dualstack".
integration <name> - select a suitable integration type for the backend of the gateway. The appropriate values are included in the nested block.
route <path> <integration> - define a route based on its declared path and the integration name to associate with it.
stage <name> - request that a stage be created for the gateway with the given name. This will automatically cause the gateway to be deployed to that stage.
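This composite wires together several API Gateway V2 resources. For orientation, the boto3 sketch below shows the approximate sequence of AWS calls involved; it is illustrative only (names, region and lambda ARN are invented) and is not the deployer's own code.

import boto3

apigw = boto3.client("apigatewayv2")

# api.gatewayV2 <name> with Protocol "http"
api_id = apigw.create_api(Name="my-api", ProtocolType="HTTP")["ApiId"]

# integration <name> - here an AWS_PROXY integration onto a lambda
integration_id = apigw.create_integration(
    ApiId=api_id,
    IntegrationType="AWS_PROXY",
    IntegrationUri="arn:aws:lambda:us-east-1:123456789012:function:my-function",
    PayloadFormatVersion="2.0",
)["IntegrationId"]

# route <path> <integration>
apigw.create_route(ApiId=api_id, RouteKey="GET /index",
                   Target=f"integrations/{integration_id}")

# stage <name> - created with auto-deploy so the gateway is deployed to it
apigw.create_stage(ApiId=api_id, StageName="production", AutoDeploy=True)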

Blanks

aws.ApiGatewayV2.Api <name> - an APIGatewayV2 blank.
Protocol - select the type of gateway you wish to create: "http" or "websocket".

aws.ApiGatewayV2.Deployment <name> - an APIGatewayV2 deployment blank.
Api - the id of the associated Api object.

aws.ApiGatewayV2.Integration <name> - an APIGatewayV2 integration blank. This provides one of potentially many ways of connecting API requests to backend services (e.g. lambdas).
Api - the id of the associated Api object.
Region - the region in which the associated lambda is to be found.
PayloadFormatVersion - the payload format version (1.0 or 2.0); only applicable to the HTTP protocol.
Type - the integration type. Use AWS_PROXY for lambdas.
Uri - the Uri of the corresponding resource, e.g. the ARN of a lambda.

aws.ApiGatewayV2.Route <path> - an APIGatewayV2 route blank. The route here is a means of describing the action to be performed in order to invoke the route. For HTTP routes, this is something like "GET /index"; for websocket routes it is a content expression such as $default.
Api - the id of the associated Api object.
Target - an appropriate URL for the recipient lambda, which can be obtained by invoking the integrationId method on a lambda object.

aws.ApiGatewayV2.Stage <name> - an APIGatewayV2 stage blank. This represents a stage such as development or production.
Api - the id of the associated Api object.

aws.ApiGatewayV2.VPCLink <name> - an APIGatewayV2 VPC link. This represents a link into a VPC.
Subnets - the subnets of the VPC to join.
SecurityGroups - the security groups of the VPC to apply.

aws.CertificateManager.Certificate <subject-name> - a certificate. The subject-name is the default subject name for the certificate. A rough SDK-level sketch of requesting such a certificate follows the property list.
Domain - a domain object that can be used for validation.
SubjectAlternativeNames - a list of alternative names to include in the certificate.
ValidationMethod - how the ownership of the domain is going to be proved. "DNS" is the only currently supported method.
ValidationProvider - the name of a mechanism for automatically validating the DNS name.
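For reference, requesting such a certificate through the AWS SDK looks roughly like the boto3 sketch below; the domain names are invented, and DNS validation of the resulting record still has to be completed (which is where the ValidationProvider and the dns-asserter extension point come in).

import boto3

acm = boto3.client("acm", region_name="us-east-1")

# Roughly what a DNS-validated certificate request looks like
response = acm.request_certificate(
    DomainName="example.org",                           # made-up subject name
    SubjectAlternativeNames=["www.example.org"],
    ValidationMethod="DNS",
)
print(response["CertificateArn"])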

aws.CloudFront.CacheBehavior <name> - a cache behavior descriptor that describes how to cache certain types of content.
CachePolicy - the id of the cache policy to associate this behavior with.
PathPattern - a specific path pattern to match against the target files.
ResponseHeadersPolicy - the id of a response headers policy.
TargetOriginId - the id of the target origin (i.e. bucket).

aws.CloudFront.CachePolicy <name> - a cache policy descriptor to bundle together cache behaviors.
MinTTL - the minimum time to live for documents retrieved from the bucket.

aws.CloudFront.Distribution <name> - a cloudfront distribution.
CacheBehaviors - the (list of) cache behavior objects to associate with the distribution.
CachePolicy - the cache policy to associate with the distribution.
Certificate - the id of a certificate to identify the website.
Comment - a comment about the distribution.
DefaultRoot - the default object to serve when the distribution receives a request for the root path.
Domain - the (list of) domains to accept requests for.
OriginAccessControl - an object to describe the access control mechanism for the target.
OriginDNS - a DNS name describing the origin.
TargetOriginId - an id to associate with the target origin.

aws.CloudFront.OriginAccessControl <name> - an origin access control link between a distribution and a bucket.
OriginAccessControlOriginType - the type of the OAC.
SigningBehavior - how to sign the requests to the bucket.
SigningProtocol - the protocol for signing the requests.

aws.CloudFront.ResponseHeadersPolicy <name> - a header to associate with a cache behavior.
Header - the header to set in the response.
Value - the value to set the header to.

aws.DynamoDB.Table <name> - a dynamodb table called name.
Fields - a list of field expressions, where each field is a pair of name and type. There can also be an @Key adverb attached to individual fields to identify them as key fields. The @Key adverb takes a parameter which can be either hash or range. A rough AWS-level equivalent is sketched below.
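As an illustration of how the field list and @Key adverbs line up with DynamoDB's own attribute definitions and key schema, here is a rough boto3 equivalent of a table with a hash key and a range key. The table and field names are invented, and on-demand billing is assumed purely for the example; this is not the deployer's own code.

import boto3

dynamodb = boto3.client("dynamodb")

# A table whose Fields list declared: id (string, @Key hash), created (number, @Key range)
dynamodb.create_table(
    TableName="my-table",                                  # made-up name
    AttributeDefinitions=[
        {"AttributeName": "id", "AttributeType": "S"},
        {"AttributeName": "created", "AttributeType": "N"},
    ],
    KeySchema=[
        {"AttributeName": "id", "KeyType": "HASH"},        # @Key hash
        {"AttributeName": "created", "KeyType": "RANGE"},  # @Key range
    ],
    BillingMode="PAY_PER_REQUEST",                         # assumption for the example
)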

aws.IAM.Policy <name> - a managed policy with the given name.
Policy - a policy document (q.v.).

aws.IAM.Role <name> - a role that can be assumed. A rough SDK-level sketch of the equivalent role creation follows the property list.
Assume - a list of policy actions to allow the role to be assumed.
Inline - a list of policy actions which can be performed by the role once assumed.
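In AWS terms, the Assume list becomes the role's trust (assume-role) policy and the Inline list becomes an inline permissions policy. The boto3 sketch below shows that shape; it is illustrative only, and the role name, service principal and bucket ARN are invented.

import json
import boto3

iam = boto3.client("iam")

# Assume: who may assume the role (here, the Lambda service)
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "lambda.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

iam.create_role(RoleName="my-role", AssumeRolePolicyDocument=json.dumps(trust_policy))

# Inline: what the role may do once assumed
inline_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject"],
        "Resource": "arn:aws:s3:::example-bucket/*",
    }],
}

iam.put_role_policy(RoleName="my-role", PolicyName="my-role-inline",
                    PolicyDocument=json.dumps(inline_policy))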

aws.Lambda.Alias <name> - a lambda alias. This can only be used as a finder - to create a new alias, use lambda.publishVersion.

aws.Lambda.Function <name> - a lambda function.
Code - a reference to the location of the code for the lambda, probably using the aws.S3.Location interpreter.
Handler - a definition for the handler (if required by language runtime).
Role - a role to attach to the lambda, possibly defined inline with aws.IAM.WithRole.
Runtime - a string definition of the runtime, which may also be provided as "go" to define the Go runtime.
VpcConfig - a VPC configuration, which can be defined inline with aws.VPC.Config.

aws.Neptune.Cluster <name> - a neptune cluster.
MaxCapacity - the maximum capacity to associate with a serverless cluster.
MinCapacity - the minimum capacity to associate with a serverless cluster.
SubnetGroupName - the name of a subnet group to identify the subnets on which the cluster will show up.

aws.Neptune.Instance <name> - a neptune instance.
Cluster - the cluster to associate the instance with.
InstanceClass - the class of the server to run (or "serverless").

aws.Neptune.SubnetGroup <name> - a neptune subnet group. As yet, this does not support creating new subnet groups. This is just a missing feature.

aws.Route53.ALIAS <name> - a route53 ALIAS record.
AliasZone - the zone which is responsible for storing the node pointed to.
PointsTo - the name pointed to.
UpdateZone - the zone to update (i.e. insert the ALIAS record into).

aws.Route53.CNAME <name> - a route53 CNAME record.
PointsTo - the name pointed to.
Zone - the zone to update (i.e. insert the CNAME record into).

aws.Route53.DomainName <name> - a route53 domain name record. Because of the complexity of creating domain names (and the fact that it costs money), creating domain names is not supported by deployer. You can, however, find them by (domain) name.

aws.S3.Bucket <name> - an S3 bucket called name.

aws.VPC.VPC <name> - a VPC record. As yet, this does not support creating new VPC objects. This is just a missing feature.

Interpreters

These interpreters allow individual properties to be set with compound values expressed succinctly.
aws.DynamoFields - parse a scope of field descriptions as a list.
field type - a pair of field name and field type.
@Key type - a nested adverb identifying key fields. The type can either be hash or range.

aws.IAM.WithRole - parse a scope defining an inline role.
assume - introduce a nested scope which defines who can assume the role.
policy - introduce a nested scope which defines permissions granted to the role.
policy <name> - add a managed policy to the role (does not have a nested scope).

aws.S3.Location - identify a specific object in an S3 bucket.
Bucket - the name of the bucket.
Key - the object key in the bucket.

aws.VPC.Config - define a VPC configuration (subnets, security groups and IP addressing), typically used to place a lambda in a VPC.
DualStack - specify if the VPC should use both IPv4 and IPv6.
Subnets - the list of subnets to associate with the VPC config.
SecurityGroups - the list of security groups to include in the VPC config.

Constants

aws.action.APIGateway.GET - "apigateway:GET"

aws.action.ec2.CreateNetworkInterface - "ec2:CreateNetworkInterface"

aws.action.ec2.DescribeNetworkInterfaces - "ec2:DescribeNetworkInterfaces"

aws.action.ec2.DeleteNetworkInterface - "ec2:DeleteNetworkInterface"

aws.action.S3.GetObject - "s3:GetObject"

aws.action.S3.PutObject - "s3:PutObject"

aws.principal.AWS - "AWS"

aws.principal.CloudFront - "cloudfront.amazonaws.com"

aws.principal.Service - "Service"

aws.resource.APIGatewayV2 - "arn:aws:apigateway:us-east-1::/apis"

aws.cond.StringEquals - "StringEquals"

aws.SourceArn - "aws:SourceArn"
Dreamhost Module
This module is intended to support automated deployment of certificates using the Dreamhost DNS API. Unfortunately, that API does not support CNAME entries beginning with underscore characters, which is required for this workflow to work.
Developer Guide
Ziniki Deployer is open-source, modular software.
In the current climate, it would be almost impossible for any company to anticipate and provide code for all possible environments and use cases for a cloud deployer. As noted elsewhere, even the cloud providers only do so through automatic generation, leaving users to navigate the complexities of their environments.
By making Ziniki Deployer modular and extensible, and by endeavouring to provide both a reasonable user model and a simple internal model, we hope that most of our users - whom we expect to be software professionals or working in organizations with software professionals - will be able to add any features and modules that they feel are missing. They can then release these back to the community.
Following the agile principles that "something is better than nothing" and that "you can always start small", it is possible to get a minimal coin working in 20 minutes or so. Adding more complex functionality such as properties and merging takes longer, but even so it is not a significant engineering effort.
It is our intent to provide a detailed guide to the internal models and processes, and to provide worked examples of the process of adding more coins, composites and commands in due course. However, the internals are still in the process of being cleaned up and refactored at this time, so for now, adding new functionality is a more advanced study than it should be. Please contact info@ziniki.org if you want to extend Ziniki Deployer at this time.
Releases
The releases on this page are listed in date order. The most recent release will always be at the bottom and should be opened by default.
2025-09-26
MacOS x86_64: deployer-darwin-x86_64.zip
Linux x86_64: deployer-linux-x86_64.zip
Module                      Git ID
deployer                    b640b6ff9a72039328b5c6e6120647828f86bcf1
golden                      b640b6ff9a72039328b5c6e6120647828f86bcf1
coremod                     b640b6ff9a72039328b5c6e6120647828f86bcf1
testmod                     b640b6ff9a72039328b5c6e6120647828f86bcf1
deployer-module-aws         df688209fd0891970dd440b17f40eb3e4d22d58d
deployer-module-dreamhost   36513fb3765db62bf0e912245f24c4b43b65dc6f
deployer-lsp                46f090f2416715c4ac4d36af5fe5eeb6ee80245b