Nodepool configuration
File structure
When the config-check and config-update jobs are run on git events occurring on the config repository, the following file structure is expected:
/
|_ nodepool/
   |_ nodepool.yaml
   |_ nodepool-builder.yaml
   |_ dib-ansible/
      |_ inventory.yaml
      |_ my-cloud-image-1.yaml
Note
If the file structure is missing or partial, the jobs will skip the related configuration check and update.
The file nodepool.yaml holds the labels and node providers configuration. This configuration is used by the nodepool-launcher process.
The file nodepool-builder.yaml holds the diskimages and the providers' image configuration. This configuration is used by the nodepool-builder process.
The dib-ansible directory is used by nodepool-builder as the directory holding the image build definitions.
Configuring Nodepool launcher
Danger
Please take care not to override any of the base settings!
The configuration provided in nodepool/nodepool.yaml will be appended to the base configuration.
What happens during a config-update job?
When a change to nodepool's configuration is merged, the following script is run to update the pods running nodepool:
#!/bin/sh
set -ex
# By default the script expects to find the 'nodepool.yaml' file in
# the config repository. For nodepool-builder, it must find
# 'nodepool-builder.yaml' instead, so the file name can be set via the
# NODEPOOL_CONFIG_FILE environment variable.
NODEPOOL_CONFIG_FILE="${NODEPOOL_CONFIG_FILE:-nodepool.yaml}"
# Generate the default nodepool configuration file
cat << EOF > ~/nodepool.yaml
---
webapp:
  port: 8006
zookeeper-servers:
  - host: zookeeper
    port: 2281
zookeeper-tls:
  ca: /tls/client/ca.crt
  cert: /tls/client/tls.crt
  key: /tls/client/tls.key
# images-dir is a mandatory key for the nodepool-builder process
images-dir: /var/lib/nodepool/dib
build-log-dir: /var/lib/nodepool/builds/logs
EOF
if [ "$CONFIG_REPO_SET" = "TRUE" ]; then
  # A config repository has been set.
  # The config-update usage context requires a specific git ref.
  REF=$1
  /usr/local/bin/fetch-config-repo.sh "$REF"
  # Append the config file provided by the config repo to the default one
  if [ -f ~/config/nodepool/${NODEPOOL_CONFIG_FILE} ]; then
    cat ~/config/nodepool/${NODEPOOL_CONFIG_FILE} >> ~/nodepool.yaml
  fi
fi
echo "Generated nodepool config:"
echo
cat ~/nodepool.yaml
cp ~/nodepool.yaml /etc/nodepool/nodepool.yaml
For each provider used in the Nodepool launcher configuration, nodepool must be able to find the required connection credentials. Please refer to the deployment documentation about setting up provider secrets.
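Purely as a format reference, OpenStack connection credentials consumed by Nodepool usually follow the standard clouds.yaml layout, where the cloud name matches the cloud key of the provider definition. The sketch below is illustrative only; the endpoint, credentials and cloud name are assumptions to adapt to your deployment, and the actual secret wiring is described in the deployment documentation.
# Illustrative clouds.yaml sketch; all values are placeholders.
# The "default" cloud name must match the provider's "cloud:" key.
clouds:
  default:
    auth:
      auth_url: https://keystone.example.com:5000/v3
      username: nodepool
      password: secret
      project_name: nodepool-tenant
      user_domain_name: Default
      project_domain_name: Default
    region_name: RegionOne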
Use an official cloud image within an OpenStack cloud
There is a simple way to configure Nodepool to use a cloud image with Zuul's SSH key, so that Zuul can run jobs on instances spawned from that image in an OpenStack cloud.
- Edit nodepool/nodepool.yaml to add labels and providers:
labels:
  - name: cloud-c9s
    min-ready: 1
providers:
  - name: default
    cloud: default
    clean-floating-ips: true
    image-name-format: '{image_name}-{timestamp}'
    boot-timeout: 120 # default 60
    cloud-images:
      - name: cloud-centos-9-stream
        username: cloud-user
    pools:
      - name: main
        max-servers: 10
        networks:
          - $public_network_name
        labels:
          - cloud-image: cloud-centos-9-stream
            name: cloud-c9s
            flavor-name: $flavor
            userdata: |
              #cloud-config
              package_update: true
              users:
                - name: cloud-user
                  ssh_authorized_keys:
                    - $zuul-ssh-key
- Save, commit, propose a review and merge the change.
- Wait for the config-update job to complete.
- If the min-ready property is over 0, you should see the new label and a ready node in the Zuul web UI, under the labels and nodes pages.
Tip
If you encounter issues, please refer to the troubleshooting guide.
Configuring Nodepool builder
Danger
Please take care not to override any of the base settings!
The configuration provided in nodepool/nodepool-builder.yaml will be appended to the base configuration (see "What happens during a config-update job?" above for implementation details).
For each provider used in the Nodepool builder configuration, nodepool must be able to find the required connection credentials. Please refer to the deployment documentation about setting up provider secrets.
disk-image-builder
Due to the security restrictions of the OpenShift platform, disk-image-builder cannot be used. Thus we do not recommend its usage in the context of the sf-operator.
dib-ansible
dib-ansible is an alternative dib-cmd wrapper provided by the sf-operator project: it wraps the ansible-playbook command. For implementation details, see controllers/static/nodepool/dib-ansible.py.
We recommend using dib-ansible to externalize the image build process to at least one image-builder machine.
To define a diskimage using dib-ansible, use the following in nodepool/nodepool-builder.yaml:
diskimages:
  - dib-cmd: /usr/local/bin/dib-ansible my-cloud-image.yaml
    formats:
      - raw
    name: my-cloud-image
    username: zuul-worker
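For the built image to be uploaded to a cloud, it must also be referenced from a provider's diskimages list in nodepool/nodepool-builder.yaml. The snippet below is only a sketch; the provider name and cloud are assumptions that must match your own launcher and secrets configuration:
# Illustrative provider image configuration; "default" is assumed to be
# the provider/cloud name used elsewhere in your configuration.
providers:
  - name: default
    cloud: default
    diskimages:
      - name: my-cloud-image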
The image build playbook file my-cloud-image.yaml must be defined in the nodepool/dib-ansible/ directory.
Here is an example of an image build playbook:
- name: My cloud image build playbook
  hosts: image-builder
  vars:
    built_image_path: /var/lib/builder/cache/my-cloud-image
  tasks:
    - debug:
        msg: "Building {{ image_output }}"
    - name: Copy the Zuul public key on the image-builder to integrate it into the built cloud image
      copy:
        src: /var/lib/zuul-ssh-key/pub
        dest: /tmp/zuul-ssh-key.pub
    # Build steps begin from here
    # - name: Build task 1
    #   shell: true
    # - name: Build task 2
    #   shell: true
    # Build steps end here
    # Set the final image path based on the expected image type
    - set_fact:
        final_image_path: "{{ image_output }}.raw"
      when: raw_type | default(false)
    - set_fact:
        final_image_path: "{{ image_output }}.qcow2"
      when: qcow2_type | default(false)
    # Synchronize the image back from the image-builder to the nodepool-builder
    - ansible.posix.synchronize:
        mode: pull
        # src: is on the image-builder
        src: "{{ built_image_path }}"
        # dest: is on the nodepool-builder pod
        dest: "{{ final_image_path }}"
Here are the available variables and their meaning:
- image_output: contains the path of the image the builder expects to find under its build directory. The file suffix is not part of the provided path.
- qcow2_type: is a boolean specifying if the built image format is qcow2.
- raw_type: is a boolean specifying if the built image format is raw.
Note
Zuul needs to authenticate via SSH onto the virtual machines spawned from built cloud images. Thus, the Zuul SSH public key should be added as an authorized key for the user Zuul will connect as. The Zuul SSH public key is available on the nodepool-builder in the file /var/lib/zuul-ssh-key/pub. A cloud image build playbook can read that file to prepare a cloud image.
Finally, we need an inventory.yaml file. It must be defined at nodepool/dib-ansible/inventory.yaml.
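Here is a minimal sketch, assuming a single image-builder host; the host address and remote user below are placeholders to adapt to your environment:
# Illustrative inventory; the host alias must match the playbook's
# "hosts: image-builder" line. Address and user are assumptions.
all:
  hosts:
    image-builder:
      ansible_host: image-builder.example.com
      ansible_user: nodepool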
Note
Nodepool builder must be able to connect via SSH to your image-builder machine. Thus please refer to the section Get the Nodepool builder SSH public key.
Once these three files, nodepool/dib-ansible/inventory.yaml, nodepool/dib-ansible/my-cloud-image.yaml and nodepool/nodepool-builder.yaml, are merged into the Software Factory config repository and the config-update job has succeeded, Nodepool will run the build process.
SSH connection issues with an image-builder host?
On the first connection attempt from the nodepool-builder to an image-builder host, Ansible will refuse to connect because the SSH Host key is not known. Please refer to the section Accept an image-builder's SSH Host key.
The status of image builds can be consulted at https://<fqdn>/nodepool/api/dib-image-list.
The image build logs can be consulted at https://<fqdn>/nodepool/builds/.