Mojo README
Table Of Contents
- Overview
- Usage
- Projects
- Stages
- Workspaces
- Containers
- Setup Commands
- Environment Variables
- Secrets
- Phases
- Debug-Logs
- Exec-On-Failure
- Specs
- Spec URLs
- Logging
- Continuous Integration
- FAQ
Overview
Mojo is a deployment workflow system for Juju.
It is a continuous integration and continuous delivery system for Juju-based services, intended to replace the custom scripts and manual manipulation previously used to deploy and maintain services.
The power of Mojo is its versatility: by creatively organizing phases and configuration files, many deployment permutations can be tested.
Usage
Mojo provides several commands, each performing one phase of a Juju deployment process. These commands reference a Mojo spec for details about the environment, charms, and deployment processes.
In addition to commands which control your deployment, Mojo includes tools for working with specs, preparing your deployment, and for integrating with CI systems like Jenkins.
Projects
A Mojo project is the directory where deployment artifacts are assembled for testing.
Typically there is one project per juju environment. There can be many workspaces per project.
The default MOJO_ROOT depends on the type of container being used (see below for details about the different container types). For LXC containers the default is /srv/mojo and for LXD containers or containerless projects it is ~/.local/share/mojo/, so projects are located in:
/srv/mojo/$MOJO_PROJECT
or
~/.local/share/mojo/$MOJO_PROJECT
The LOCAL directory is where secrets should be placed for use by the deployment (see the section about secrets below):
/srv/mojo/LOCAL
or
~/.local/share/mojo/LOCAL
The Mojo secrets phase copies data from the LOCAL directory to the workspace local directory.
The ROOTFS directory is the container rootfs.
/srv/mojo/$MOJO_PROJECT/$MOJO_SERIES/ROOTFS
or
~/.local/share/mojo/$MOJO_PROJECT/$MOJO_SERIES/ROOTFS
where $MOJO_SERIES is precise, trusty, etc.
Stages
Stages are arbitrarily-defined identifiers which distinguish different deployment environments to be used with the same spec. Typical stages are devel, staging, and production, but these are just conventions and any identifier can be used here to reflect your desired purposes. For example, you may have multiple production environments reflecting different cloud provider regions, so each of these could have its own stage.
Workspaces
A Mojo workspace is a directory used for assembling a particular version of the specification being tested. Typically, a workspace is named for the revision number of the spec and the stage, e.g. 23-production:
$MOJO_ROOT/$MOJO_PROJECT/$MOJO_SERIES/$MOJO_WORKSPACE
Workspace layout:
The specification is pulled from a repository or directory to the spec directory:
$MOJO_ROOT/$MOJO_PROJECT/$MOJO_SERIES/$MOJO_WORKSPACE/spec
The build directory is where the collect phase stores the charms and payload data by default. The build phase script should point to the contents of this directory:
$MOJO_ROOT/$MOJO_PROJECT/$MOJO_SERIES/$MOJO_WORKSPACE/build
The charms directory is where the repo phase sets up the charm repository. The deploy phase deploys charms from this location:
$MOJO_ROOT/$MOJO_PROJECT/$MOJO_SERIES/$MOJO_WORKSPACE/charms
The Mojo secrets phase copies data from the LOCAL directory (see above) to the workspace local directory:
$MOJO_ROOT/$MOJO_PROJECT/$MOJO_SERIES/$MOJO_WORKSPACE/local
Containers
A Mojo project may also have a container associated with it. Currently a project may be containerless, use an LXC container, or use an LXD container. This is decided at project creation time using the --container switch to project-new.
LXC/LXD: If a project is set up using a container, then by default all build phases will be run within the container with no network access. This default can be changed using lxc=False in the options. Other script phases can be run within the container using lxc=True. Apt packages will be installed in the container during the builddeps phase. NOTE: LXD is only supported on xenial and later.
containerless: If the project is set up containerless, then all mojo commands are run directly on the host with whichever libraries, packages, and resources the host may have.
Setup Commands
Create a new project:
mojo project-new $MOJO_PROJECT -s $MOJO_SERIES --container [lxc, lxd, containerless]
# eg:
mojo project-new my-stack-name -s trusty -c lxd
List all projects:
mojo project-list
Delete a project:
mojo project-destroy $MOJO_PROJECT
Create a new workspace:
mojo workspace-new --project $MOJO_PROJECT -s $MOJO_SERIES $SPEC_URL $MOJO_WORKSPACE
# eg:
mojo workspace-new --project my-stack-name -s trusty lp:~user/mojo-specs/my-stack-name stack-test-wip
List all workspaces:
mojo workspace-list --project $MOJO_PROJECT -s $MOJO_SERIES
# eg:
mojo workspace-list --project my-stack-name -s trusty
Delete a workspace:
mojo workspace-destroy --project $MOJO_PROJECT -s $MOJO_SERIES $MOJO_WORKSPACE
# eg:
mojo workspace-destroy --project my-stack-name -s trusty stack-test-wip
Environment Variables
To shorten commands, you can configure environment variables as follows:
MOJO_PROJECT=my-project
MOJO_SERIES=trusty
MOJO_WORKSPACE=test-wip
MOJO_STAGE=devel
MOJO_SPEC=/path/to/spec/repo
MOJO_LOGFILE=/path/to/log/file
With these set, rather than doing:
mojo run --project my-project --series trusty --stage devel /path/to/spec/repo test-wip
You can simply do:
mojo run
Secrets
Secrets play a major role in the complexity of deploying a production environment and therefore need to be tested along with the rest of the stack.
Secrets are any data required by the spec which cannot be stored in the spec repository, including TLS keys and certificates, access tokens, database passwords, etc., and any configuration items which are dependent upon the target infrastructure, such as public IP addresses and persistent volume ids.
Although a goal of Mojo is to organize as much as possible in the specification repository, this does not apply to secrets. This is to ensure that we can share knowledge of the specification with as many people as possible, but restrict access to secrets to only those deploying into a specific environment (such as the production instance of a given service).
So, by design, there is a manual element to secrets handling in Mojo. Mojo will assume secrets can be found at this location:
$MOJO_ROOT/LOCAL/$MOJO_PROJECT/$MOJO_STAGE
The Mojo secrets phase will copy secrets from this location to the workspace local directory:
$MOJO_ROOT/$MOJO_PROJECT/$MOJO_SERIES/$MOJO_WORKSPACE/local
Be sure to add the secrets phase to your manifest early enough to make secrets available to the rest of the phases. This makes the secrets available inside the LXC/LXD container; they are then available to the later phases in $MOJO_LOCAL_DIR.
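For example, a manifest that makes secrets available to every later phase might begin like this (mirroring the manifest examples later in this README):
secrets
collect
build
repo
deploy config=services local=secrets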
Types of secrets
There are two types of secrets which are commonly included in a specification:
1. Secret options in a charm configuration. An example would be a charm which requires Swift credentials to be able to upload data into Swift.
This requires the secrets to be available during a deploy phase. Fortunately, juju-deployer can use multiple deployer config files. We can pass a second juju-deployer config file by using the local= setting in the deploy phase, and having the file we specify there include the secret options.
Place a juju-deployer config file with secrets in the following location:
$MOJO_ROOT/LOCAL/$MOJO_PROJECT/$MOJO_STAGE/$SECRETS_FILE
$SECRETS_FILE can be an arbitrary file name (secrets is a good file name for clarity).
Example secrets file (called secrets):
mystack:
  services:
    myservice:
      options:
        db-username: myDbUsername
        db-password: myDbPass
Example services file (called services):
mystack:
  series: trusty
  services:
    myservice:
      charm: mycharm
      options:
        non-secret-option-1: value1
        non-secret-option-2: value2
        # It is OK to have secret content which is base64 included in
        # the services file (i.e. certificates, keys)
        ssl_key: include-base64://{{local_dir}}/mysite.key
Example manifest deploy line:
deploy config=services local=secrets
Juju-deployer will merge the two files, so it will effectively be deploying:
mystack:
  series: trusty
  services:
    myservice:
      charm: mycharm
      options:
        db-username: myDbUsername
        db-password: myDbPass
        non-secret-option-1: value1
        non-secret-option-2: value2
        # It is OK to have secret content which is base64 included in
        # the services file (i.e. certificates, keys)
        ssl_key: include-base64://{{local_dir}}/mysite.key
2. Secret files which are to be included in the deployment in some other way. This could be TLS keys, or a script which includes a database password that will be manually copied to an application server.
As you can see above, we have a TLS key (mysite.key) which is included in the deployment by turning the contents of a particular file into base64 output, which the charm then converts back to a file as part of the deployment. As a result, we would want to put mysite.key into our secrets directory ($MOJO_ROOT/LOCAL/$MOJO_PROJECT/$MOJO_STAGE/), and then by running the secrets phase, that file is copied to {{local_dir}} to be used as part of this charm deployment.
Phases
Mojo arranges deployments into several phases in order to ensure their repeatability. By breaking a deployment into small parts, Mojo allows you to easily build separate staging and production (or any other) environments whose differences are carefully controlled. You can also gather important information about each phase of your deployment to help you improve the quality and stability of your service.
Each phase may be performed any number of times, in any order. The typical progression for a Mojo deployment is: collect, builddeps, build, repo, deploy (services), deploy (relations), verify.
All phases require the --project, --series, --workspace and --stage switches.
These values can also be provided by the environment variables $MOJO_PROJECT, $MOJO_SERIES, $MOJO_WORKSPACE, and $MOJO_STAGE.
Phase Commands
mojo run
The one command to rule them all. The mojo run command pulls down the spec URL, reads the manifest, and begins executing each phase in order per the manifest.
mojo run --project $MOJO_PROJECT --series $MOJO_SERIES --stage $MOJO_STAGE $SPEC_URL $MOJO_WORKSPACE
# eg:
mojo run --project my-stack-name --series trusty --stage devel lp:~user/mojo-specs/my-stack-name stack-test-wip
mojo secrets
The secrets phase copies secrets files from the LOCAL directory (default: $MOJO_ROOT/LOCAL/$MOJO_PROJECT/$MOJO_STAGE) to the workspace local directory ($MOJO_WORKSPACE_DIR/local) where they can be accessed by the remaining phases.
Options:
config=SECRETS_DIRECTORY
Directory where secrets are stored
Default: $MOJO_ROOT/LOCAL/$MOJO_PROJECT/$MOJO_STAGE
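For example, a manifest line reading secrets from a non-default directory (the path is illustrative):
secrets config=/srv/mojo/LOCAL/shared-secrets/devel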
See the above section about secrets for more information.
mojo collect
The collect phase gathers all code and other resources required to build your service via codetree. Some services may be composed only of charms, while others may collect static content, local configuration, or code which must be compiled.
The collect configuration file is a Jinja2 template that is used to
generate a valid codetree configuration file. The format is the
following, where destination is relative to $MOJO_WORKSPACE_DIR/build
and source is a remote resource or a local directory. Example:
destination source1
../spec/destination source2
Options:
config=COLLECT_CONFIG_FILENAME
The name of the codetree collect configuration file
workers=NUMBER_OF_THREADS
The number of threads codetree creates to retrieve all the code
If not specified, codetree will create at most one thread per source listed in the collect configuration file
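For example, a manifest line naming a dedicated collect file and a fixed worker count (the filename mirrors the upgrade example later in this README):
collect config=collect-upgrade workers=4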
The following variables will be set in the template: project, workspace, series, stage, stage_name, and stage_parent.
Example:
{% if stage_name == "devel" -%}
nrpe cs:nrpe;channel=candidate
{% else -%}
nrpe cs:nrpe
{% endif -%}
mojo builddeps
The builddeps phase prepares the container by installing any required packages, apt repositories, or apt repository GPG keys. It is passed the packages, repos, and repo_keys options, each as a comma-delimited list.
Options:
repo_keys=APT_GPG_KEY[,APT_GPG_KEY]
Comma-delimited list of apt repository GPG key file names
repos=REPO[,REPO]
Comma-delimited list of apt repositories (See man apt-add-repository)
packages=PACKAGE_NAME[,PACKAGE_NAME]
Comma-delimited list of packages.
Note that all packages are installed with --no-install-recommends, so you must be explicit about dependent packages.
mojo script
A script phase executes a user-defined script. Use scripts to manipulate your Juju units directly or to set up external resources.
The script phase can be used for testing juju upgrade-charm $CHARM after an initial deploy has completed.
A script phase run outside the container may be needed to access nova or juju commands without the need to copy nova credentials into the container.
Environment variables are injected into the script environment (Build and verify are subclasses of script and get these too).
MOJO_PROJECT = Mojo project name
MOJO_WORKSPACE = Workspace name
MOJO_WORKSPACE_DIR = Workspace path
MOJO_BUILD_DIR = Workspace build dir
MOJO_REPO_DIR = Workspace repo dir
MOJO_SPEC_DIR = Workspace spec dir
MOJO_LOG_DIR = Workspace log dir
MOJO_LOCAL_DIR = Workspace local dir
MOJO_STAGE = Stage
MOJO_STAGE_NAME = Shortened Stage, e.g. mystage/production -> production
MOJO_STAGE_PARENT = The opposite of MOJO_STAGE_NAME, e.g. mystage/production -> mystage
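For example, a script phase script might use these variables as follows (a minimal sketch; the checks themselves are illustrative):
#!/bin/sh
set -e
echo "Running post-deploy checks for $MOJO_PROJECT ($MOJO_STAGE_NAME)"
# Secrets copied in by the secrets phase are available in the local dir
ls "$MOJO_LOCAL_DIR"
# Stage-specific behaviour
if [ "$MOJO_STAGE_NAME" = "production" ]; then
    echo "Skipping destructive checks in production"
fi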
Options:
config=SCRIPT_NAME
The name of the script to be executed
lxc=True/False
Whether to execute the script inside the lxc or lxd container
Defaults to False, except for the build phase which forces
lxc=True.
debug-logs=debug-logs
File to search for defining debug-log config.
debug-logs-stages-to-exclude=production
Debug-log won't be run when stages matching this name are used
exec-on-failure=exec-on-failure
Name of stage-specific script to execute in case of failure
KEY=VALUE
All KEY=VALUE parameters will be set as environment variables for use by the script
See the Debug-Logs section below for syntax and more details on debug-logs, and the Exec-On-Failure section for more details on exec-on-failure.
mojo build
A build phase is nearly identical to a script phase, except that network access in the container is prohibited during execution. This ensures that all required resources are documented and available during collect.
Options:
config=BUILD_SCRIPT_NAME
The name of the build script to be executed
mojo repo
The repo phase assembles a local Juju charm repository from a charm-to-directory mapping.
By building a separate repo (rather than using the build directory), Mojo allows the developer to structure the build directory in any way they like. The charm map also becomes an explicit list of the charms expected to be used in a deployment.
The repo configuration file is a text file of space-delimited key-value pairs, one per line. Each key is the name of a charm. Each value is the path, relative to $MOJO_SPEC_DIR/build, where the charm is located. If the charm name and path are the same, the path may be omitted.
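An example repo configuration file (charm names and paths are illustrative):
haproxy
my-service charms/my-service
Here haproxy lives at a path of the same name, so the path is omitted, while my-service is found under charms/my-service.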
Options:
config=REPO_CONFIG_FILENAME
The name of the repo configuration file
mojo deploy
During a deploy phase, charms are deployed and/or related via juju-deployer.
The deploy configuration file is a valid juju-deployer configuration file. The juju-deployer configuration file can be templated. Jinja2 is used to produce the final result. The following variables are available in the template:
workspace: $MOJO_WORKSPACE
series: $MOJO_SERIES
spec_dir: The spec directory path
local_dir: The local directory path where secrets are found
build_dir: The build directory path
repo_dir: The path to the root of the charm repository
charm_dir: repo_dir with the project series appended, i.e. $MOJO_REPO_DIR/$MOJO_SERIES
stage: $MOJO_STAGE directory path i.e. $TEAM/$SERVICE/$STAGE
stage_name: basename($MOJO_STAGE) i.e. Just $STAGE
stage_parent: dirname($MOJO_STAGE) i.e. Just $TEAM/$SERVICE
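For example, a minimal templated services file might look like this (charm and option names are illustrative):
mystack:
  series: {{ series }}
  services:
    myservice:
      charm: myservice
      options:
        ssl_cert: include-base64://{{ local_dir }}/mysite.crt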
The deploy phase is often separated into services and relations which gives the tester a better idea where failures are occurring.
Options:
config=DEPLOY_CONFIG_FILENAME
The name of the deploy juju-deployer configuration file. Multiple filenames may be supplied, comma-separated.
local=SECRETS_FILENAME
The name of the local juju-deployer configuration file. Found in the
workspace local directory. It contains secrets or other information
which cannot be included in the spec repository.
juju=JUJU_ENV
The juju environment to deploy in from environments.yaml.
delay=DELAY_IN_SECS
The juju deployer delay between instance deploys in seconds.
Default: 0 seconds
status-timeout=TIMEOUT_IN_SECS
We check juju status after each deploy phase. How long in seconds
before we timeout if instances remain in a pending status.
Default: 1800 (30min)
target=JUJU DEPLOYER CONFIGURATION TARGET
If the deploy configuration has multiple targets, this can be used to specify which to run.
timeout=TIMEOUT_IN_SECS
Overall timeout for the juju-deployer deployment.
Default: 2700 (45min)
retry=RETRY_COUNT
Number of times to let juju-deployer retry failed units (-r).
Default: unset (no retry)
wait=True/False
Wait for the environment to reach steady state.
Default: False
max-wait=MAX_WAIT_IN_SECS
How long to wait for the environment to reach steady state.
Default: 300 (5 minutes)
apply-config-changes=True/False
Update config items in environment's services to match the config.
Default: False
additional-ready-states=ADDITIONAL_READY_STATES
Additional states that should be considered as ready when checking juju status after a deploy
phase. Also applies within a deploy phase after configuring applications/services but before
configuring relations. For example: blocked,waiting
Default: unset (no additional states)
status-settle-wait=WAIT_IN_SECS
How long to wait between successful exit of juju-deployer and when
the juju status check is run.
Default: 0 (No wait)
Subcommands:
mojo deploy-show
Show rendered juju-deployer configuration. Render the configuration with
all options and inheritance.
Options are the same as the mojo deploy phase above
mojo deploy-diff
Show juju-deployer diff. Generate a delta between a configured deployment
and a running environment
Options are the same as the mojo deploy phase above
The following additional command-line options are supported:
--apply-config-changes
Update config items in environment's services to match the config.
--preview-config-changes
Display config items that differ from those in the live environment.
These may also be invoked in manifests as phases of the same
name, and supplied with the same options as the "deploy" phase to
specify the configuration to use.
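For example (assuming key=value options can be passed on the command line as with phases, and using the services and secrets files from the secrets example above):
mojo deploy-show config=services local=secrets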
mojo bundle
During a bundle phase, charms are deployed and/or related via juju's bundle functionality. This requires at least juju 2.3 to function properly and will not allow itself to be called with a lower client version.
The configuration is a valid juju bundle. The bundle may be templated; Jinja2 is used to produce the final configuration file. The following variables are available in the template:
workspace: $MOJO_WORKSPACE
series: $MOJO_SERIES
spec_dir: The spec directory path
local_dir: The local directory path where secrets are found
build_dir: The build directory path
stage: $MOJO_STAGE directory path i.e. $TEAM/$SERVICE/$STAGE
stage_name: basename($MOJO_STAGE) i.e. Just $STAGE
stage_parent: dirname($MOJO_STAGE) i.e. Just $TEAM/$SERVICE
A bundle phase may be separated into services and relations to give the user a better idea of where failures may be occurring.
Options:
config=DEPLOY_CONFIG_FILENAME
The name of the deploy juju bundle file. Multiple filenames may be supplied, comma-separated.
local=SECRETS_FILENAME
The name of the local juju bundle file. Found in the
workspace local directory. It contains secrets or other information
which cannot be included in the spec repository.
status-timeout=TIMEOUT_IN_SECS
We check juju status after each phase. How long in seconds
before we timeout if instances remain in a pending status.
Default: 1800 (30min)
wait=True/False
Wait for the environment to reach steady state.
Default: False
max-wait=MAX_WAIT_IN_SECS
How long to wait for the environment to reach steady state.
Default: 300 (5 minutes)
additional-ready-states=ADDITIONAL_READY_STATES
Additional states that should be considered as ready when checking juju status after a deploy
phase.
Default: unset (no additional states)
status-settle-wait=WAIT_IN_SECS
How long to wait between successful exit of juju and when
the juju status check is run.
Default: 0 (No wait)
map-machines=existing[,X=X',Y=Y',Z=Z']
By default, new machines will be created by the bundle and associated with the machine IDs specified in the bundle configuration file. Using this option allows the bundle to use existing machines in the model. Additionally, machines X (in the bundle configuration) can be mapped to X' (the current machine ID in the model).
See https://juju.is/docs/charm-bundles for more details.
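For example, a bundle phase that reuses existing machines and remaps two machine IDs (the bundle filename is illustrative):
bundle config=bundle map-machines=existing,0=4,1=5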
Subcommands:
mojo bundle-show
Show rendered bundles. Render the configuration with all options and inheritance.
Options are the same as the mojo bundle phase above
mojo bundle-diff
Show juju --dry-run output. Generate a delta between a configured deployment and a running environment. Options are the same as the mojo bundle phase above
These may also be invoked in manifests as phases of the same
name, and supplied with the same options as the "bundle" phase to
specify the configuration to use.
It is important to note that, unlike deployer phases, bundle phases will apply any configuration changes in the bundles against the live environment. For example, if a user were to deploy an apache2 service, then change the charm options for apache2 in the bundle and re-run mojo, the changed options would be applied to the deployed apache2 service. This is analogous to the deployer phase's apply-config-changes being set to True.
Also important to be aware of is how bundles specify charm locations. Where juju-deployer was able to deduce the location of charms from the mojo environment variables, bundles require that you explicitly specify a path to a local charm if that's what you intend to deploy, like so:
charm: {{ charm_dir }}/haproxy
This specifies the full path to a multi-series charm fetched locally by codetree. You can also use charm store charms by simply prefixing your charm name with cs: in the bundle; this is implicit if no path is specified, so bundles default to deploying charm store charms.
mojo sleep
Inject wait time between phases. Used when a juju command such as upgrade-charm has been executed and time is needed before the next phase begins.
config=SECONDS_TO_SLEEP
Number of seconds to sleep
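For example, to pause for five minutes between phases:
sleep config=300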
mojo verify
The verify phase is also a particular type of script phase. During verify, tests are performed to ensure that the previous phases have resulted in a working service. When a verify step succeeds, the deployment is considered a success.
Options:
config=VERIFY_SCRIPT_NAME
The name of the verify script to be executed
lxc=True/False
Whether to execute the script inside the lxc or lxd container
retry=Integer
How many times to retry (defaults to 1, i.e. doesn't retry)
sleep=Integer
How many seconds to wait between retries (defaults to 5)
debug-logs=debug-logs
File to search for defining debug-log config.
debug-logs-stages-to-exclude=production
Debug-log won't be run when stages matching this name are used
exec-on-failure=exec-on-failure
Name of stage-specific script to execute in case of failure
See the Debug-Logs section below for syntax and more details on debug-logs, and the Exec-On-Failure section for more details on exec-on-failure.
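For example, a verify phase that retries a slow-settling check (the script name is illustrative):
verify config=check-app retry=3 sleep=10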
mojo nagios-check
Run all nagios checks on all units in an environment. This phase connects to every unit in an environment and runs the nagios checks configured in /etc/nagios/nrpe.d/. Any non-zero output (i.e. warning, critical, unknown) will be considered a failure. Any machines with no services running will be logged as a warning but not considered a failure. However, any machines with services running but nagios not installed will be considered a failure.
Options:
skip-checks='check_etc_bzr,check_log_archive_status'
The default checks to skip. Should be a comma-separated list with no spaces.
skip-checks-extra=''
Additional checks to skip. Should be a comma-separated list with no spaces.
retry=Integer
How many times to retry (defaults to 1, i.e. doesn't retry)
sleep=Integer
How many seconds to wait between retries (defaults to 5)
services=''
Which services to run nagios checks against (defaults to all). Should
be a comma-separated list with no spaces.
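For example, skipping one additional check and limiting the run to a single service (names are illustrative):
nagios-check skip-checks-extra=check_swap services=myservice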
mojo juju-check-wait
Wait for the juju environment to reach a steady state, erroring if there is a problem with the environment.
Options:
status-timeout=TIMEOUT_IN_SECS
We check juju status after each deploy phase. How long in seconds
before we timeout if instances remain in a pending status.
Default: 1800 (30min)
wait-for-workload=True/False
Whether to wait for the workload to reach an active state. If set
to False it does not pay attention to any workload state except
'error', because charms are not required to set their state
meaningfully.
Default: False
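For example, a manifest line that also waits for workloads to become active:
juju-check-wait wait-for-workload=True status-timeout=3600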
mojo stop-on
The stop-on phase is also a particular type of script phase. If you want a script to optionally stop execution of further phases in a manifest, you can designate a particular return code to do so. A zero return code continues execution of later phases in the manifest; any other return code raises an exception.
Options:
config=STOP_ON_SCRIPT_NAME
The name of the stop-on script to be executed
return-code=Integer(1-255)
The return code on which to stop further execution of the manifest. A zero return code continues execution of later phases in the manifest; any other return code raises an exception
Example:
stop-on return-code=99 config=check-for-changes
Debug-Logs
In some phases (such as 'script' and 'verify'), Mojo supports a debug-logs option. In case of a failure of the associated phase, Mojo will look for a config file matching the name of the "debug-logs" option (by default, "debug-logs"). This config file can be stage-specific or the same for all stages in a specification.
The format of the file is as follows:
servicename:
  config-files: ['/path/to/config-files/*', '/path/to/another/file']
  directory-listings: ['/path/of/directories/to/list']
  logs: ['/path/of/logfiles/*.log', '/path/of/another/logfile']
servicename2:
  config-files: ['/path/to/config-files/*conf']
  directory-listings: ['/path/of/directory/to/list']
  logs: ['/path/of/logfiles']
If Mojo is not running a stage that matches debug-logs-stages-to-exclude (which defaults to production), it will iterate through this file, connecting to each service and printing the contents of the specified configuration files, listing directories as appropriate, and printing the last 200 lines of any log files specified. File and directory names are expanded using Python's glob module.
Exec-On-Failure
In script, verify, and deploy phases, Mojo supports an exec-on-failure option. In case of a failure of the associated phase, Mojo will look for a script matching the name of the "exec-on-failure" option (by default, "exec-on-failure"). This script can be stage-specific or the same for all stages in a specification.
The script must be executable, and will be run outside of the project container.
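For example, an exec-on-failure script might capture the state of the environment at the moment of failure (a minimal sketch):
#!/bin/sh
# Save juju status for later debugging; this runs on the host, outside the container
juju status > "$MOJO_LOG_DIR/juju-status-on-failure.txt" 2>&1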
Specs
A Mojo spec is simply a repository of files. The spec includes a manifest file which is used by mojo run to run all configured phases.
Mojo is to specs as juju is to charms. The specs are where the real power of mojo comes to light. By creatively arranging the phases and corresponding configurations and scripts one can test almost anything. Initial deployments, code updates, charm updates, and order of relations all can be tested using various combinations of phases and configuration.
Creating a new Spec
The helper command mojo new-spec dir-name will create a new spec based on a commented template. The spec will be created at the path dir-name, or in the named directory of your current working dir if only a name is given.
It is highly recommended that you add your spec directory to source control using your preferred source control system.
spec layout
There are two supported approaches for structuring a spec repository:
Single spec in one repository
At the top of the directory is the manifest which lists which phases are executed in which order. There are directories for each stage - see the above section about stages for more information.
Inside the stage directories are configuration files and scripts which correspond to the phases listed in the manifest. Mojo will check the $MOJO_STAGE directory recursively.
Example spec layout:
./manifest
./devel/collect
./devel/build
./devel/services
./devel/relations
./devel/verify
./shared/helper/build
./shared/helper/collect
Multiple specs in one repository
Example spec layout:
./team1
./team1/service1
./team1/service1/manifest
./team1/service1/devel/collect
./team1/service1/devel/build
./team1/service1/devel/services
./team1/service1/devel/relations
./team1/service1/devel/verify
./team2
./team2/service2
./team2/service2/manifest
./team2/service2/devel/collect
./team2/service2/devel/build
./team2/service2/devel/services
./team2/service2/devel/relations
./team2/service2/devel/verify
This example shows two specs within the same repository. The first would be called with a stage of team1/service1/devel and the second would be called with a stage of team2/service2/devel. The nesting of directories can be arbitrary based on your needs. If you only need one level of nesting you could have service1/devel, or if you need three levels of nesting you could have department1/team1/service1/devel.
The spec manifest
A manifest defines the deployment progression. It lists phases, one per line, along with optional key=value pairs pointing to scripts or configuration files in the stage directory.
A simple example:
secrets
collect
build
deploy config=services
deploy config=relations
verify
A more interesting example including an upgrade charm script:
builddeps repo_keys=ubuntu-archive-keyring.gpg repos="deb http://ubuntu-cloud.archive.canonical.com/ubuntu trusty-updates/cloud-tools main" packages=bzr,make,python-virtualenv,libpq-dev,python-dev,zip
secrets
collect
build
repo
script config=predeploy lxc=True MYVAR=True
# Deploy services requiring persistent storage
deploy config=services-with-storage
# Deploy main services
deploy config=services local=secrets
# Relate subordinate charms
deploy config=subordinate-relations
# Relate charms
deploy config=relations
script config=postdeploy MYVAR=False
verify
# Rebuild with updated charm or payload
collect config=collect-upgrade
build
repo
# Upgrade charms
script config=upgrade-charm
verify
Any comment in a manifest (a line beginning with a single '#') will be included in the logged output when the spec is run. Comments beginning with a double-hash ('##') will not be included in the logged output.
Manifests support includes. So, our manifest might look like this:
collect
repo
deploy config=services
include config=other-manifest
We would then have a file called other-manifest in the same directory as manifest, and it might contain this:
script config=post-deploy
verify
mojo run would do the equivalent of running a manifest like this:
collect
repo
deploy config=services
script config=post-deploy
verify
But we could also run mojo run -m other-manifest, which would only run the following phases:
script config=post-deploy
verify
Spec URLs
It is recommended that your spec be kept in a source control repository. The spec_url argument or MOJO_SPEC environment variable can be of any source URL format supported by codetree, including bzr repositories, git repositories, or even local directories.
The codetree defaults are generally relied upon, and this is normally what you want; for example, it is faster to avoid overwriting a workspace that is used multiple times. However, if needed, you can add codetree-style source options. Each line below is a valid spec_url:
lp:myapp-woohoo
lp:myapp2-woohoo;revno=44
/home/me/my_new_yet_to_be_committed_spec
git://github.com/myapp
git://github.com/mybetterapp;overwrite=true
Logging
You can use either a command line option or an environment variable to set a logfile location for Mojo. The output of all commands is then sent to that logfile in addition to being shown on the console. This can be very useful for referring back to later, or for exposing the results of Mojo commands to others.
export MOJO_LOGFILE=/tmp/mojo.log
mojo run
Alternatively:
mojo -L /tmp/mojo.log run
The default option is to log to:
$MOJO_ROOT/$MOJO_PROJECT/$MOJO_SERIES/$MOJO_WORKSPACE/log/mojo.log
Continuous Integration
contrib/jenkins/mojo-to-jenkins
The mojo-to-jenkins script uses the templates in contrib/jenkins/job-templates to generate Jenkins jobs. Jenkins can then execute mojo runs against the given specification.
cd contrib/jenkins/mojo-to-jenkins
./mojo-to-jenkins $MOJO_PROJECT $SPEC_URL
FAQ
Should I run mojo with sudo?
No, mojo calls sudo for you, so you should never have to run mojo with sudo. Mojo itself will call subcommands with sudo and you will be asked for your sudo password. The need for your sudo password can be completely removed by installing the contrib/99-mojo-sudoers file and adding your user to the mojo group.
For automated systems, such as jenkins, that cannot use sudo passwords, the contrib/99-mojo-sudoers file is necessary.
Why should or shouldn't I use a container with my project?
The benefit of using a container is for build or script phases that require libraries, packages, or any other resources that do not exist on the host machine. By using a container, one can test the build resources themselves without affecting the host machine's functionality, and one may test the build or script phases in a different version of the OS than the host. By restricting network access during the build phase, one mimics production environments that may have restricted egress firewall rules.
If your mojo spec does not have a build phase or does not require any libraries, packages or resources not already available on the host during any of the script phases then a containerless project is the optimal choice. A containerless project uses less overhead and takes up less disk space.
Why do I get /var/lib/lxc/$MOJO_PROJECT/config permission denied?
Depending on which version of lxc is installed, it may be necessary to widen the permissions on /var/lib/lxc. If one sees permission denied on /var/lib/lxc/$MOJO_PROJECT/config, then chmod the lxc directory:
chmod 755 /var/lib/lxc
Please be aware of this launchpad bug.
Where should I store secrets?
Place secrets in $MOJO_ROOT/LOCAL/$MOJO_PROJECT/$MOJO_STAGE/. Add the mojo secrets phase early in the manifest file. Please see the above section on secrets for more details.
How can I use shared code for multiple specs?
The collect phase can be used to collect code and store it in any location within the workspace relative to $MOJO_WORKSPACE_DIR/build. By placing shared code in the $MOJO_SPEC_DIR it creates a $MOJO_STAGE which can be run with mojo $CMD --stage $MOJO_STAGE. Example:
# Collect file
../spec/shared $SHARED_BRANCH
# Command
mojo run --stage shared
How can I iteratively test a spec without bzr committing each change to a repository?
Simply change your spec_url to the local directory you are working out of.
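For example (the path is illustrative):
MOJO_SPEC=/home/me/my-wip-spec mojo run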