Overview of deployment flows¶
Stage 1: Pushing a component to the component artifact repository¶
MACH composer is primarily used to deploy components into an individual site, providing each component with the right context (settings, endpoints, etc.).
Components themselves are responsible for publishing their own artifacts into an artifact repository (which can be a simple S3 bucket). Usually this is implemented by configuring a CI/CD pipeline that manages this automatically.
An artifact is usually a zip file containing a serverless function, e.g.
my-component-vXYZ.zip. These can be conveniently generated using the
Serverless Framework, using
sls package, but can also be built using a
different process. Next to serverless functions, it is increasingly common to
deploy Docker containers through MACH composer, using the serverless container
hosting options of cloud providers (e.g. AWS Fargate).
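As an illustrative sketch (the component name, version and bucket name below are hypothetical), a pipeline step that packages and publishes such an artifact might look like:

```shell
# Package the serverless function (hypothetical component layout).
cd my-component
sls package  # writes the deployment package to .serverless/

# Copy the package under a versioned name and publish it to the
# artifact repository -- here a plain S3 bucket.
VERSION="v1.0.3"
cp .serverless/my-component.zip "my-component-${VERSION}.zip"
aws s3 cp "my-component-${VERSION}.zip" \
  "s3://my-artifact-repository/my-component/my-component-${VERSION}.zip"
```

The exact packaging step differs per component; the important part is that each version ends up under a predictable, versioned key in the artifact repository.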
A component always contains a Terraform module
In addition to publishing its artifact into an artifact repository, a component
should also provide the Terraform resources it needs. Usually a component has a
/terraform directory in its root, containing the Terraform module that serves
as the entrypoint for MACH composer. MACH composer in turn leverages
Terraform's 'module sources'
functionality to 'pull together' the different modules from different Git
repositories.
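For example, a Git-based Terraform module source referencing a component's /terraform directory roughly takes this shape (repository URL and ref are hypothetical):

```terraform
module "my_component" {
  # The '//terraform' suffix selects the module subdirectory inside the
  # repository; 'ref' pins the component version (values are hypothetical).
  source = "git::https://github.com/example-org/my-component.git//terraform?ref=v1.0.3"

  # MACH composer passes the site-specific context (settings, endpoints,
  # etc.) to the module as input variables.
}
```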
MACH composer ☁️ provides a component registry API
Updating component versions based on GitHub commits is often unreliable and can destabilize your MACH composer pipelines. To improve this, we are introducing a Component Registry as part of MACH composer ☁️.
Stage 2: MACH composer deployment¶
MACH composer itself is primarily a code generator: it generates the required Terraform code for each site in the YAML configuration. Optionally (though recommended), MACH composer first decrypts the YAML configuration through SOPS.
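Conceptually, a deployment run then boils down to something like the following (the configuration file name is hypothetical, and the exact CLI flags may differ per MACH composer version):

```shell
# Generate the Terraform code per site from the (SOPS-encrypted) YAML
# configuration, then plan and apply the resulting changes.
mach-composer generate -f main.yml
mach-composer plan -f main.yml
mach-composer apply -f main.yml
```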
Running MACH composer in CI/CD is a best practice
For production deployments we recommend running MACH composer in a CI/CD pipeline. This is because running it requires access to sensitive resources and should be properly secured, and because a pipeline provides a good audit trail of who deployed what, and when.
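A minimal sketch of such a pipeline as a GitHub Actions workflow (secret names and the configuration file are hypothetical, and the MACH composer invocation depends on your setup):

```yaml
name: deploy
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Cloud credentials (also used by SOPS to decrypt the configuration)
      # are injected as secrets, keeping them out of the repository.
      - name: Deploy with MACH composer
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        run: mach-composer apply -f main.yml
```

Restricting who can trigger this workflow, and on which branches, gives you both the access control and the audit trail mentioned above.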