Testing with CipherStash in CI
After you’ve integrated CipherStash into your local development environment, it’s likely running with your own personal credentials. When it’s time to submit your code for review, you’ll need to configure your Continuous Integration (CI) pipeline to export the environment variables that CipherStash needs.
This how-to guide covers:
- Creating an access key
- Creating a new dataset
- Creating a dataset client
- Uploading your dataset config
- Configuring CI
Creating an access key
When you’re working locally, CipherStash will work using your personal account. However, in a CI environment you need a way to access a workspace that isn’t connected directly to your personal account.
You can achieve this by creating an access key for your workspace.
You can check which workspaces you have access to using the `stash workspaces` command.
```shell
stash access-keys create --workspace-id <your workspace> "CI Access Key"
```
After the command completes, it should output the following information:
```
Access key created for WorkspaceId: <your workspace>
with this name: CI Access Key

#################################################
#                                               #
#  Copy and store these credentials securely.   #
#                                               #
#  THIS IS THE LAST TIME YOU WILL SEE THE KEY.  #
#                                               #
#################################################

To use this key in your application for programmatic access,
add the below environment variables:

CS_CLIENT_ACCESS_KEY=<your key here>
CS_WORKSPACE_ID=<your workspace>
```
`CS_CLIENT_ACCESS_KEY` is the secret key that your CI will use to access CipherStash. Note it down for the following steps.
If you suspect a secret key has been leaked, you can revoke it using the `stash access-keys revoke` command.
Creating a new dataset
To create a new dataset for CI, run this command:
```shell
stash datasets create "CI Dataset" --description "Dataset for CI data"
```
After the command completes, it should output the following information:
```
Dataset created:
  ID         : <dataset uuid>
  Name       : CI Dataset
  Description: Dataset for CI data
```
Make note of the `ID` field from the dataset output; you’ll need it in the following steps.
Creating a dataset client
Once you’ve created a new dataset, you’ll need to create a client that can read from it.
```shell
stash clients create --dataset-id "<dataset uuid>"
```
After the command completes, it should output the following information:
```
Client created:
  Client ID  : <client id>
  Name       : CI Client
  Description:
  Dataset ID : <dataset id>

#################################################
#                                               #
#  Copy and store these credentials securely.   #
#                                               #
#  THIS IS THE LAST TIME YOU WILL SEE THE KEY.  #
#                                               #
#################################################

Client ID        : <client id>
Client Key [hex] : <client key>
```
Make note of the `Client ID` and `Client Key`, as you’ll need them in the following steps.
Upload your dataset config
Now that you’ve created a new dataset, you must upload your dataset config so your CI environment behaves the same as your local development environment.
```shell
stash datasets config upload --file ./dataset.yml \
  --client-key "<key from last step>" \
  --client-id "<client id from last step>"
```
Make sure you keep your local and CI dataset configs in sync; you may get unexpected results if they drift apart.
Configuring CI
With your new access key, dataset and client, and your dataset config uploaded, you have everything you need to get CipherStash running in CI.
The last step is to update your CI’s pipeline configuration to export the following environment variables from the previous steps:
- `CS_WORKSPACE_ID`: your workspace ID
- `CS_CLIENT_ACCESS_KEY`: the key from the access key step
- `CS_CLIENT_ID`: the client ID from the dataset client step
- `CS_CLIENT_KEY`: the client key (in hex) from the dataset client step
You may also need to export `CS_VITUR_HOST` if your workspace is in a region other than `ap-southeast-2`.
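A missing secret often surfaces later as a confusing runtime error, so it can help to fail fast. Here’s a minimal sketch (the function name and script layout are assumptions, not part of CipherStash) that checks the required variables before your test suite runs:

```shell
#!/bin/sh
# Sketch: fail fast in CI if any required CipherStash variable is unset.
# check_cipherstash_env is a hypothetical helper; adapt it to your own scripts.
check_cipherstash_env() {
  for var in CS_WORKSPACE_ID CS_CLIENT_ACCESS_KEY CS_CLIENT_ID CS_CLIENT_KEY; do
    # Expand the variable named in $var without clobbering anything else
    eval "value=\${$var:-}"
    if [ -z "$value" ]; then
      echo "Missing required environment variable: $var" >&2
      return 1
    fi
  done
}

# Call it at the top of your CI test script:
# check_cipherstash_env || exit 1
```

Running this before your tests turns a missing secret into an immediate, clearly named failure instead of a cryptic CipherStash connection error mid-run.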
These variables should be treated as secrets, and shouldn’t be checked into your source control.
The process of adding these secrets will differ depending on your CI provider, so check your provider’s documentation for the details.
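For example, in GitHub Actions the variables can be injected from repository secrets. This is an illustrative sketch only; the workflow layout, secret names, and test script below are assumptions you should adapt to your project and provider:

```yaml
# .github/workflows/ci.yml (illustrative sketch; secret names are assumptions)
name: CI
on: [push]

jobs:
  test:
    runs-on: ubuntu-latest
    env:
      # Map repository secrets to the environment variables CipherStash expects
      CS_WORKSPACE_ID: ${{ secrets.CS_WORKSPACE_ID }}
      CS_CLIENT_ACCESS_KEY: ${{ secrets.CS_CLIENT_ACCESS_KEY }}
      CS_CLIENT_ID: ${{ secrets.CS_CLIENT_ID }}
      CS_CLIENT_KEY: ${{ secrets.CS_CLIENT_KEY }}
    steps:
      - uses: actions/checkout@v4
      - run: ./ci/run-tests.sh  # hypothetical test entry point
```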
Now you should be good to go!
For more information on best practices around clients and datasets, check out the reference docs.