Configuring Cache Servers

BuildStream caches the results of builds in a local artifact cache, and will avoid building an element if there is a suitable build already present in the local artifact cache. Similarly it will cache sources and avoid pulling them if present in the local cache. See caches for more details.

In addition to the local caches, you can configure one or more remote caches and BuildStream will then try to pull a suitable object from one of the remotes, falling back to performing a local build or fetching a source if needed.

On the client side, cache servers are declared and configured in user configuration. Since it is typical for projects to maintain their own cache servers, projects can also recommend artifact cache servers and source cache servers through project configuration, so that downstream users download from services provided by upstream projects by default.

Setting up a remote cache

The rest of this page outlines how to set up a shared cache.

Setting up the user

A specific user is not strictly required; however, a dedicated user to own the cache is recommended.

useradd artifacts

The recommended approach is to run two instances on different ports. One instance has push disabled and doesn’t require client authentication. The other instance has push enabled and requires client authentication.

Alternatively, you can set up a reverse proxy and handle authentication and authorization there.

Installing the server

You will also need to install BuildStream on the cache server in order to run its companion bst-artifact-server program. Follow the instructions for installing BuildStream here.

When installing BuildStream on the cache server, it must be installed in a system-wide location, with pip3 install . in the BuildStream checkout directory.

Otherwise, some tinkering is required to ensure BuildStream is available in PATH when its companion bst-artifact-server program is run remotely.

You can install only the artifact server companion program without requiring BuildStream’s more exigent dependencies by setting the BST_ARTIFACTS_ONLY environment variable at install time, like so:

BST_ARTIFACTS_ONLY=1 pip3 install .

Command reference

bst-artifact-server

CAS Artifact Server

bst-artifact-server [OPTIONS] REPO

Options

-p, --port <port>
    Port number (required)

--server-key <server_key>
    Private server key for TLS (PEM-encoded)

--server-cert <server_cert>
    Public server certificate for TLS (PEM-encoded)

--client-certs <client_certs>
    Public client certificates for TLS (PEM-encoded)

--enable-push
    Allow clients to upload blobs and update the artifact cache

--quota <quota>
    Maximum disk usage in bytes (default: 10000000000)

--index-only
    Only provide the BuildStream artifact and source services ("index"), not the CAS ("storage")

--log-level <log_level>
    The log level to launch with (one of: warning | info | trace)

Arguments

REPO
    Required argument

Key pair for the server

For TLS you need a key pair for the server. The following example creates a self-signed key, which requires clients to have a copy of the server certificate (e.g., in the project directory). You can also use a key pair obtained from a trusted certificate authority instead.

openssl req -new -newkey rsa:4096 -x509 -sha256 -days 3650 -nodes -batch -subj "/CN=artifacts.com" -out server.crt -keyout server.key

Note

Note that in the -subj "/CN=<foo>" argument, /CN is the certificate common name, and as such <foo> should be the public hostname of the server. IP addresses will not provide you with working authentication.

In addition to this, ensure that the host server is recognised by the client. You may need to add the line: <ip address> <hostname> to your /etc/hosts file.
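To sanity-check a certificate before distributing it to clients, you can inspect its subject and expiry with openssl. The following self-contained sketch generates a throwaway key pair in a temporary directory, mirroring the command above (artifacts.com is a placeholder; substitute your server's public hostname), and then prints the fields to verify:

```shell
# Generate a throwaway self-signed key pair in a temporary directory,
# mirroring the server key pair command above ("artifacts.com" is a
# placeholder; use your server's public hostname).
tmpdir=$(mktemp -d)
openssl req -new -newkey rsa:4096 -x509 -sha256 -days 3650 -nodes -batch \
    -subj "/CN=artifacts.com" \
    -out "$tmpdir/server.crt" -keyout "$tmpdir/server.key"

# Print the subject and expiry so the common name can be checked
# before the certificate is copied to clients.
openssl x509 -in "$tmpdir/server.crt" -noout -subject -enddate
```

Since clients verify the server's hostname against the certificate's common name, a mismatch here shows up later as TLS connection failures, so it is worth checking at generation time.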

Authenticating users

In order to give permission to a given user to upload artifacts, create a TLS key pair on the client.

openssl req -new -newkey rsa:4096 -x509 -sha256 -days 3650 -nodes -batch -subj "/CN=client" -out client.crt -keyout client.key

Copy the public client certificate client.crt to the server and then add it to the authorized keys, like so:

cat client.crt >> /home/artifacts/authorized.crt

Serve the cache over HTTPS

Public instance without push:

bst-artifact-server --port 11001 --server-key server.key --server-cert server.crt /home/artifacts/artifacts

Instance with push and requiring client authentication:

bst-artifact-server --port 11002 --server-key server.key --server-cert server.crt --client-certs authorized.crt --enable-push /home/artifacts/artifacts

Note

BuildStream’s artifact cache uses Bazel’s Remote Execution CAS and Remote Asset API.

Sometimes, when using Remote Execution, it is useful to run BuildStream with just a basic CAS server, without the Remote Asset API, but BuildStream still needs an index service to store artifact and source metadata in order to work correctly.

For this scenario, you can add the --index-only flag to the above commands, and configure BuildStream to store artifact metadata and files in separate caches (e.g. bst-artifact-server and Buildbarn) using the type attribute of a cache server configuration.
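As an illustrative sketch of this split setup, a client's configuration might point at two servers and use the type attribute to distinguish the index from the storage. The URLs below are placeholders and the exact schema should be checked against the cache server configuration reference for your BuildStream version:

```yaml
# Hypothetical client configuration splitting index and storage:
# a bst-artifact-server run with --index-only provides the index,
# while a separate CAS instance (e.g. Buildbarn) provides the storage.
artifacts:
  servers:
  - url: https://artifacts.com:11001    # bst-artifact-server --index-only
    type: index
  - url: https://cas.example.com:8980   # CAS-only server
    type: storage
```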

Managing the cache with systemd

We recommend running the cache as a systemd service, especially if it is running on a dedicated server, as this allows systemd to manage the cache and restart it if the service encounters any issues.

Below are two examples of how to run the cache server as a systemd service. The first is pull-only; the second is configured for both push and pull. Notice that the two configurations use different ports.

bst-artifact-serve.service:

#
# Pull
#
[Unit]
Description=Buildstream Artifact pull server
After=remote-fs.target network-online.target

[Service]
Environment="LC_ALL=C.UTF-8"
ExecStart=/usr/local/bin/bst-artifact-server --port 11001 --server-key {{certs_path}}/server.key --server-cert {{certs_path}}/server.crt {{artifacts_path}}
User=artifacts

[Install]
WantedBy=multi-user.target

bst-artifact-serve-receive.service:

#
# Pull/Push
#
[Unit]
Description=Buildstream Artifact pull/push server
After=remote-fs.target network-online.target

[Service]
Environment="LC_ALL=C.UTF-8"
ExecStart=/usr/local/bin/bst-artifact-server --port 11002 --server-key {{certs_path}}/server.key --server-cert {{certs_path}}/server.crt --client-certs {{certs_path}}/authorized.crt --enable-push {{artifacts_path}}
User=artifacts

[Install]
WantedBy=multi-user.target

Here we define when systemd should start the service (after the networking stack has been started) and how to run the cache with the desired configuration, under the artifacts user. The {{ }} placeholders denote paths that you should change to point to your desired locations.

Note

You may need to run some of the following commands as the superuser.

These files should be copied to /etc/systemd/system/. We can then enable these services so they start at boot:

systemctl enable bst-artifact-serve.service
systemctl enable bst-artifact-serve-receive.service

Then, to start these services:

systemctl start bst-artifact-serve.service
systemctl start bst-artifact-serve-receive.service

We can then check if the services are successfully running with:

journalctl -u bst-artifact-serve.service
journalctl -u bst-artifact-serve-receive.service

For more information on systemd services see: Creating Systemd Service Files.

Declaring remote caches

Remote caches can be declared within either:

  1. The user configuration for artifacts and sources.

  2. The project configuration for artifacts and sources.

Please follow the above links to see examples showing how to declare remote caches in the user configuration and the project configuration, respectively.
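As an illustrative sketch, the two instances set up earlier on this page could be declared in a client's user configuration roughly as follows. The field names follow BuildStream 2's cache server schema and the paths are placeholders; verify both against the user configuration reference for your version:

```yaml
# Pull anonymously from the read-only instance; push to the
# authenticated instance using the client key pair authorized
# on the server.
artifacts:
  servers:
  - url: https://artifacts.com:11001
    auth:
      server-cert: server.crt           # copy of the self-signed server cert
  - url: https://artifacts.com:11002
    push: true
    auth:
      server-cert: server.crt
      client-cert: client.crt           # pair added to authorized.crt
      client-key: client.key
```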